I'm a creative software engineer who loves to work on challenging problems supporting researchers, scientists, and engineers. I focus on user interfaces and frontend software engineering using a human-centered design approach. Specifically, I apply software engineering to research in assistive technology, machine learning, computer vision, computer graphics, and data visualization.
Currently, I am a Research and Development Engineer at the UCSF Chang Lab. I support neuroscientists and engineers on the Brain-Computer Interface Restoration of Arm and Voice (BRAVO) clinical trial.
In May 2023, I graduated from the UC Berkeley Master of Engineering program in Electrical Engineering and Computer Sciences. For my year-long capstone project, my team and I built an application that translates American Sign Language video to English text in real time. I also created a realistic drone simulation in Unreal Engine 5, designed and built a fire visualization in Unity for the California Caldor Fire, and worked on other technical projects in computer graphics and visual computing. In addition, I completed two certificates through the Fung Institute for Engineering Leadership: 1) Communications Mentorship and 2) Affinity Group Leader in Human-Computer Interaction.
Until 2022, I worked as a Software Engineer at NASA Ames Research Center. I designed and built user interfaces that brought complex data to life, including spacewalk data and air quality data from the International Space Station.
Through these experiences, I am deepening my understanding of what makes a user interaction intuitive. From visual components, to layouts, to underlying code, my goal is to create products with seamless experiences. I hope to make the world a more inclusive place, one product at a time.
Topics I'm interested in outside of engineering are art and diversity in tech. As part of the Berkeley Master of Engineering program, I wrote an op-ed about the state of diversity in tech and strategies to increase diverse representation, published on the Fung Institute Medium website. In my free time, I enjoy being with family, reading, painting, and playing video games.
As part of the Master of Engineering program at UC Berkeley, two other graduate students and I built a web prototype of an application that translates American Sign Language into English text at the word level.
The rise in popularity of virtual meetings has left Deaf and hard-of-hearing people out of the conversation. One existing solution is to hire an in-person translator for online meetings; however, translators require time and money. We developed an alternative, no-cost solution. We conducted user research and gathered feedback from over 120 American Sign Language (ASL) users. Based on our findings, we designed and built the Sign Language Assistant for Meetings (SLAM) application, a real-time ASL-to-English text translator. Users download SLAM as an extension for their Zoom meetings, sign into the computer's built-in camera, and the video is sent to the backend. The backend uses the Sign Pose-based Transformer for word-level sign language recognition (SPOTER) machine learning model, which works from key points on the user's hands, face, and body. We trained the model to recognize over 1,000 signs and output the English translation with the highest probability. The final translation is displayed as captions for the other members of the virtual meeting, integrated into the Zoom client. For future work, SLAM could be expanded to translate spoken English to ASL.
I was responsible for gathering user feedback, obtaining the data set, training and integrating the machine learning model, video data extraction and processing, end-to-end application testing, frontend and backend code integration, and deployment of the full-stack infrastructure to the cloud servers.
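To give a sense of the pipeline's shape, here is a minimal sketch of the keypoint-extraction and inference step. It assumes MediaPipe Holistic for keypoints, and the model loader and vocabulary (`load_spoter_model`, `VOCAB`) are hypothetical stand-ins for our trained SPOTER checkpoint; the production extension differed in its details.

```python
# Minimal sketch of the SLAM inference loop (assumptions: MediaPipe Holistic
# for keypoint extraction; `load_spoter_model` and `VOCAB` are hypothetical
# stand-ins for the trained SPOTER checkpoint and its label vocabulary).
import cv2
import mediapipe as mp
import numpy as np
import torch

holistic = mp.solutions.holistic.Holistic(static_image_mode=False)
model = load_spoter_model("spoter_checkpoint.pt")  # hypothetical loader
model.eval()

def frame_to_keypoints(frame_bgr):
    """Extract (x, y) pose and hand landmarks from one video frame."""
    results = holistic.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    points = []
    for lm_set, count in ((results.pose_landmarks, 33),
                          (results.left_hand_landmarks, 21),
                          (results.right_hand_landmarks, 21)):
        if lm_set is None:
            points.extend([(0.0, 0.0)] * count)  # pad missing landmarks
        else:
            points.extend((lm.x, lm.y) for lm in lm_set.landmark)
    return np.array(points, dtype=np.float32)

def predict_word(frames):
    """Run SPOTER on a clip of frames and return the most likely English word."""
    keypoints = np.stack([frame_to_keypoints(f) for f in frames])  # (T, K, 2)
    with torch.no_grad():
        logits = model(torch.from_numpy(keypoints).unsqueeze(0))
    return VOCAB[int(logits.argmax(dim=-1))]
```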
A demo of the application running on Zoom is below, showing three team members signing into their web cameras while the translation appears in real time in the text box next to the video.
Team members: Irina Hallinan (EECS), Isadora Smith (BIOE), and Ashley Zhang (EECS), UC Berkeley, MEng program
Academic advisor: Prof. Brian Barsky (EECS), Assistive Technology for Cursor Control group
The poster describing the project can be viewed here.
The paper describing technical details can be viewed here.
As part of the Advanced Computer Graphics class at Rice University, a team of students and I built a therapeutic balance game using the Wii Balance Board. The goal of the interactive game was to help kids practice balancing and moving using a low-cost at-home device. We built the game in C++ with OpenGL and collaborated closely with an engineer at the Motion Lab at the Shriners Hospital for Children in Houston.
I was responsible for integrating the Wii hardware API with the frontend OpenGL code and for testing the game prototype with a patient at the hospital.
The paper describing the project can be seen here.
News articles can be seen here: engadget and Rice News.
A video describing the project can be seen below.
Game development team: Jesus Cortez, Irina [Patrikeeva] Hallinan, Nick Zhu, and Jennifer Humphreys.
Engineering team: Michelle Pyle, Drew Berger, and Matt Jones.
As part of the Computer Graphics course at UC Berkeley, three other students and I built a 3D drone flight simulator that incorporates open-source data from Google Earth to plan paths in realistic world scenarios. The simulation demo is built in Unreal Engine 5, a free game engine. For the purposes of this project, we focused on one scenario. The demo compares different path-planning algorithms to select the best flight path. We added realism to the simulation, such as a crash mode triggered when the drone flies into an obstacle. The output is a video from a virtual camera following the drone's path through a realistic simulated 3D world, with paths planned around obstacles and through specified points in space.
I was responsible for obtaining the 3D scene model from Google Earth, integrating the scene and drone models in Unreal Engine, and animating the failure mode in which the drone crashes into obstacles.
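As an illustration of the kind of grid-based search such a comparison can include, here is a minimal A* sketch on a toy 3D occupancy grid; the grid, start, and goal are hypothetical placeholders, and the actual project planned paths against the Google Earth scene inside Unreal Engine 5.

```python
# Minimal A* sketch on a 3D occupancy grid (toy example only).
import heapq
import numpy as np

def a_star(grid, start, goal):
    """grid: 3D numpy array where 1 marks an obstacle; start/goal: (x, y, z)."""
    def h(p):  # straight-line distance heuristic (admissible)
        return float(np.linalg.norm(np.subtract(p, goal)))

    moves = [(dx, dy, dz) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
             for dz in (-1, 0, 1) if (dx, dy, dz) != (0, 0, 0)]
    open_set = [(h(start), start)]
    came_from = {start: None}
    g_score = {start: 0.0}

    while open_set:
        _, current = heapq.heappop(open_set)
        if current == goal:  # walk parents back to the start
            path = [current]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        for d in moves:
            nxt = tuple(int(c) for c in np.add(current, d))
            if any(c < 0 or c >= s for c, s in zip(nxt, grid.shape)):
                continue  # outside the map
            if grid[nxt] == 1:
                continue  # obstacle voxel
            tentative = g_score[current] + float(np.linalg.norm(d))
            if tentative < g_score.get(nxt, float("inf")):
                g_score[nxt] = tentative
                came_from[nxt] = current
                heapq.heappush(open_set, (tentative + h(nxt), nxt))
    return None  # goal unreachable

grid = np.zeros((20, 20, 10), dtype=int)
grid[10, :, :5] = 1  # a wall the drone must fly around or over
print(a_star(grid, (0, 0, 0), (19, 19, 0)))
```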
The paper describing the technical details can be accessed here.
Additional information about the project can be seen on the project website.
Other projects from the class can be seen here.
As part of the Computer Graphics course at UC Berkeley, I implemented a cloth simulation using a physics-based mass-and-spring system. I simulated different properties of the cloth such as density, spring constant, and damping. I also implemented simple shaders in GLSL for GPU rendering. The cloth is represented as a grid of point masses, each storing its position in 3D.
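Below is a minimal sketch of the core mass-and-spring update, using Verlet integration with illustrative constants (the real simulator also handles shearing and bending springs, collisions, and GLSL shading).

```python
# Minimal mass-spring cloth step using Verlet integration (illustrative
# constants; only structural springs between adjacent masses are modeled).
import numpy as np

N = 16                             # cloth resolution: N x N point masses
mass, ks, damping = 1.0, 500.0, 0.2
rest_len = 1.0 / (N - 1)
dt = 1.0 / 60.0
gravity = np.array([0.0, -9.8, 0.0])

pos = np.zeros((N, N, 3))
pos[..., 0], pos[..., 1] = np.meshgrid(np.linspace(0, 1, N), np.linspace(0, 1, N))
prev = pos.copy()
pinned = [(0, 0), (0, N - 1)]      # pin two corners so the cloth hangs

def spring_forces(p):
    """Hooke's law for springs between horizontally/vertically adjacent masses."""
    f = np.zeros_like(p)
    for axis in (0, 1):
        d = np.diff(p, axis=axis)                        # vectors between neighbors
        length = np.linalg.norm(d, axis=-1, keepdims=True)
        force = ks * (length - rest_len) * d / np.maximum(length, 1e-9)
        lo = [slice(None)] * 3
        lo[axis] = slice(0, -1)
        hi = [slice(None)] * 3
        hi[axis] = slice(1, None)
        f[tuple(lo)] += force                            # pull lower-index mass toward neighbor
        f[tuple(hi)] -= force                            # and the neighbor back
    return f

def step():
    global pos, prev
    accel = gravity + spring_forces(pos) / mass
    new = pos + (1.0 - damping) * (pos - prev) + accel * dt * dt   # Verlet update
    for i, j in pinned:
        new[i, j] = pos[i, j]                            # pinned masses stay put
    prev, pos = pos, new

for _ in range(300):
    step()
print(pos[N // 2, N // 2])   # position of the cloth's center after 5 seconds
```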
Details can be seen on the project website.
Other projects from the class can be seen here.
As part of the Computer Graphics course at UC Berkeley, my partner and I implemented physics-based rendering using path-tracing techniques. Specifically, we generated rays from the virtual camera and implemented ray-primitive intersections, such as ray-sphere and ray-triangle. Then, we implemented an optimization based on a Bounding Volume Hierarchy representation of objects to render our scenes more efficiently. We calculated direct illumination with a Monte Carlo estimator, accounting for light that travels from a light source directly to the camera or bounces off an object once before reaching the camera. Afterwards, we added global illumination by recursively bouncing rays of light until they terminate probabilistically via the Russian Roulette algorithm. Finally, we implemented adaptive sampling to render images with less noise.
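To illustrate why Russian Roulette termination stays unbiased, here is a small toy sketch (a 1D bounce series rather than our actual C++ renderer): each bounce continues with probability p and divides its contribution by p, so the expected value matches the full infinite sum.

```python
# Toy demonstration of Russian Roulette: estimate the infinite bounce series
# sum_{k>=0} albedo^k = 1 / (1 - albedo) by terminating paths probabilistically
# and dividing surviving contributions by the continuation probability.
import random

def rr_estimate(albedo, p_continue=0.7):
    """One path: keep bouncing with probability p_continue, stay unbiased."""
    total, throughput = 0.0, 1.0
    while True:
        total += throughput                  # light contributed by this bounce
        if random.random() > p_continue:
            break                            # terminate the path
        throughput *= albedo / p_continue    # compensate for the paths we killed
    return total

random.seed(0)
albedo = 0.6
n_paths = 200_000
estimate = sum(rr_estimate(albedo) for _ in range(n_paths)) / n_paths
print(f"Russian Roulette estimate: {estimate:.4f}")
print(f"Analytic value:            {1 / (1 - albedo):.4f}")   # 2.5
```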
Details can be seen on the project website.
We also implemented advanced ray-tracing features, such as mirror and glass materials. The details of the advanced features can be seen on this website.
Other projects from the class can be seen here.
As part of the Computer Graphics course at UC Berkeley, my partner and I implemented geometric modeling techniques. First, we implemented the de Casteljau algorithm to build Bezier curves and surfaces. Second, we implemented local and global operations on a triangular mesh, from smooth shading to mesh subdivision, using the half-edge data structure. Specifically, we implemented Phong shading using vertex normals, and we implemented edge flips and edge splits as local mesh operations. As a global mesh operation, we implemented the Loop subdivision algorithm for mesh upsampling.
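Here is a minimal sketch of the de Casteljau evaluation step (a runnable toy in Python; the course implementation was in C++ and also extended the idea to Bezier surfaces).

```python
# de Casteljau's algorithm: evaluate a Bezier curve at parameter t by
# repeatedly lerping between adjacent control points until one point remains.
import numpy as np

def de_casteljau(control_points, t):
    pts = np.asarray(control_points, dtype=float)
    while len(pts) > 1:
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]   # one level of interpolation
    return pts[0]

# A cubic Bezier curve with four 2D control points (illustrative values).
ctrl = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
for t in np.linspace(0.0, 1.0, 5):
    x, y = de_casteljau(ctrl, t)
    print(f"t={t:.2f} -> ({x:.2f}, {y:.2f})")
```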
Details can be seen on the project website.
Other projects from the class can be seen here.
In this project, I trained three neural networks to extract information about fashion photographs. I then created a morph timeline video from the photographs and predicted labels, showing the progression of fashion through each decade alongside historical context. The historical context was extracted using natural language processing techniques. In the final video, the extracted clothing information, the morph of each photograph into the next, and the historical data are presented side by side, so the viewer can draw their own conclusions about whether history has any effect on fashion. More broadly, the video presents both the changes in fashion and major historical events throughout the decades.
The paper describing the technical details of the project can be viewed here.
Presentation slides accompanying the video can be seen here.
The complete morph video can be viewed on YouTube.
Details can be seen on the project website.
In this project, I implemented a morphing algorithm that changes an image of one face into an image of another face in both shape (geometry) and color (pixel values). The algorithm relies on manually selected correspondence points in each face. I applied this technique to morph the face of Audrey Hepburn into the face of Marilyn Monroe. I also calculated the "mean face" of a data set of Danish computer scientists. With these tools in hand, I created caricatures of my face based on the average Danish face. Additionally, I changed my apparent ethnicity by making my face more Vietnamese, made a video of my face morphing into another student's face, and finally made a music video of characters from my favorite video game morphing into each other.
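The per-frame morph boils down to interpolating the correspondence points and cross-dissolving the warped images. Below is a minimal sketch that leans on scikit-image's piecewise affine warp; the project implemented the Delaunay triangulation and per-triangle affine warps from scratch, and the file names and point arrays here are hypothetical placeholders.

```python
# Minimal morph-frame sketch using scikit-image's piecewise affine warp
# (file names and point arrays are hypothetical placeholders).
import numpy as np
from skimage import io
from skimage.transform import PiecewiseAffineTransform, warp

def morph_frame(img_a, img_b, pts_a, pts_b, t):
    """Blend shape and color at fraction t (0 = image A, 1 = image B)."""
    mid_pts = (1.0 - t) * pts_a + t * pts_b        # intermediate shape

    def warp_to_mid(img, pts):
        tform = PiecewiseAffineTransform()
        tform.estimate(mid_pts, pts)               # maps mid-shape coords -> source coords
        return warp(img, tform)

    warped_a = warp_to_mid(img_a, pts_a)
    warped_b = warp_to_mid(img_b, pts_b)
    return (1.0 - t) * warped_a + t * warped_b     # cross-dissolve colors

# Hypothetical inputs: two aligned face photos and their (x, y) correspondence
# points, selected by hand in the same order for both faces.
img_a = io.imread("hepburn.jpg") / 255.0
img_b = io.imread("monroe.jpg") / 255.0
pts_a = np.load("hepburn_points.npy")
pts_b = np.load("monroe_points.npy")
frames = [morph_frame(img_a, img_b, pts_a, pts_b, t) for t in np.linspace(0, 1, 45)]
```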
The video of Audrey Hepburn morphing into Marilyn Monroe can be viewed on YouTube.
The video of changing my ethnicity to be more Vietnamese can be viewed on YouTube.
The video of League of Legends characters morphing into each other can be viewed on YouTube.
Details can be seen on the project website.
For the first part of this project, I took photographs of places that I thought would be interesting to stitch into a mosaic, which approximates a photograph taken by a single large field-of-view camera. Panoramic mosaics can even span more than 180 degrees. I chose three sets of images in indoor and outdoor settings, picking locations with many distinct details and horizontal and vertical lines to help with selecting correspondence points. For the second part of the project, I implemented auto-stitching of image mosaics based on the paper by M. Brown et al., "Multi-Image Matching using Multi-Scale Oriented Patches" (2005). I used the same photos in both parts to test the automatic feature detection.
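For illustration, here is a minimal sketch of the stitching core, using OpenCV's feature matching and homography estimation as a stand-in (the project implemented corner detection, feature descriptors, matching, and RANSAC by hand following the paper; the image file names are hypothetical).

```python
# Minimal two-image stitch sketch using OpenCV as a stand-in.
import cv2
import numpy as np

img1 = cv2.imread("left.jpg")
img2 = cv2.imread("right.jpg")

# Detect and match features between the two overlapping photos.
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]

src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)

# Estimate the homography mapping image 2 into image 1's frame (RANSAC rejects
# bad matches), then warp and paste to form the mosaic.
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
h, w = img1.shape[:2]
mosaic = cv2.warpPerspective(img2, H, (w * 2, h))
mosaic[0:h, 0:w] = img1
cv2.imwrite("mosaic.jpg", mosaic)
```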
Details can be seen on the project website.
FireVizVR is a virtual reality (VR) application that aims to help people understand the spread of wildfire in an immediate way. The project addresses the problem of reconstructing a historical fire from publicly available data by visualizing its spread over time in the immersive medium of virtual reality. Our application combines multiple sources of publicly available fire data into a single VR experience for an immersive and interactive wildfire visualization. For the proof of concept, we focused on the Caldor Fire, which occurred in August 2021 in El Dorado County, California. The Caldor Fire was one of the largest fires recorded in California wildfire history, so a large amount of data related to it is available. In the future, our application can extend to other wildfires given the requisite data.
I was responsible for gathering and processing video and photo data from existing web sources using Python, and for implementing the video and photo assets in Unity. Additionally, I implemented an algorithm in Python that generates daily 2D maps of burnt areas from the given fire perimeters. Finally, I created the controller user interface asset in Unity. I collaborated with two other students on integrating the parts of the application, testing the prototype, creating the poster and video (shown below), and collecting user feedback.
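Here is a minimal sketch of the burnt-area rasterization idea, assuming a daily fire perimeter is available as a longitude/latitude polygon; the GeoJSON file name and grid resolution are hypothetical placeholders for the actual data pipeline, which wrote map textures consumed by Unity.

```python
# Rasterize a daily fire perimeter polygon into a 2D burnt-area mask
# (illustrative sketch; the input file name is a hypothetical placeholder).
import json
import numpy as np
from matplotlib.path import Path
import matplotlib.pyplot as plt

# Hypothetical GeoJSON file with one (lon, lat) perimeter polygon for the day.
with open("caldor_perimeter_2021-08-20.geojson") as f:
    polygon = json.load(f)["features"][0]["geometry"]["coordinates"][0]

lons, lats = zip(*polygon)
res = 512                                       # output map resolution
lon_grid = np.linspace(min(lons), max(lons), res)
lat_grid = np.linspace(min(lats), max(lats), res)
xx, yy = np.meshgrid(lon_grid, lat_grid)

# Mark every grid cell whose center falls inside the perimeter as burnt.
mask = Path(polygon).contains_points(np.column_stack([xx.ravel(), yy.ravel()]))
burnt = mask.reshape(res, res)

plt.imsave("burnt_area_2021-08-20.png", burnt, cmap="Reds", origin="lower")
```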
The poster with a technical summary of the project can be viewed here.
The demo of the FireVizVR application can be viewed on YouTube.
Details can be seen on the project website.
In this project, I built several versions of the dice game Midnight (1-4-24) in Unity: a 3D desktop game, an AR game for mobile phones, and an Oculus Quest 2 game. The core game code was similar across versions; only the interactions had to be adjusted depending on whether the player clicks with a computer mouse, taps on a phone screen, or uses the Quest controllers.
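For context, here is a short sketch of the 1-4-24 scoring rule shared by all three versions, as the game is commonly played (a Python stand-in for illustration; the Unity versions were written in C#).

```python
# Scoring sketch for Midnight (1-4-24), as commonly played: the six kept dice
# must include a 1 and a 4 to qualify, and the score is the sum of the other
# four dice (24 at most).
def score_midnight(dice):
    """dice: the six kept dice. Returns 0 if the hand does not qualify."""
    if len(dice) != 6:
        raise ValueError("Midnight is scored on exactly six kept dice")
    kept = list(dice)
    if 1 not in kept or 4 not in kept:
        return 0                    # missing a qualifier die
    kept.remove(1)                  # set aside one qualifier of each kind
    kept.remove(4)
    return sum(kept)                # remaining four dice count toward the score

assert score_midnight([1, 4, 6, 6, 6, 6]) == 24   # perfect "midnight" hand
assert score_midnight([2, 3, 5, 6, 6, 6]) == 0    # no 1 and 4: disqualified
print(score_midnight([1, 4, 5, 6, 2, 3]))         # 16
```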
The demo of the VR application can be viewed on YouTube.
As part of the User Interface Design and Development course at UC Berkeley, four other students and I created ByteBusters, a prototype of a web scam detector. The goal of our app is to help people stay safe on the internet by flagging suspicious messages as scams. We used OpenAI's GPT-3.5 to check each message for signs of a scam; the model returns a scam-likelihood score and the reasons why the message is or is not a scam. We used Figma to develop wireframes and Next.js to develop the interface.
I was responsible for user research, designing the home page, and integrating the home page with the ChatGPT API.
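Here is a minimal sketch of the scam-check call, assuming the OpenAI Python client; the prompt wording and JSON response shape are simplified placeholders rather than the exact ByteBusters prompt.

```python
# Minimal scam-check sketch using the OpenAI Python client (prompt text and
# JSON shape are illustrative placeholders).
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def check_message(message: str) -> dict:
    """Ask GPT-3.5 for a scam-likelihood score (0-100) and its reasons."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You are a scam detector. Reply with JSON containing "
                        "'score' (0-100 scam likelihood) and 'reasons' (a list of strings)."},
            {"role": "user", "content": message},
        ],
        temperature=0,
    )
    return json.loads(response.choices[0].message.content)

result = check_message("Your package is held at customs. Pay $2.99 at http://example.com/pay")
print(result["score"], result["reasons"])
```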
A video describing the project in detail can be seen below.
A poster with details of the project can be seen here.
Nartwhal is a search and discovery engine for art exhibits. It helps people of all backgrounds discover art exhibits based on their preferred location and customizable art preferences. I designed and built the application prototype using Adobe Creative Cloud tools, including XD, Illustrator, and InDesign. This project was completed in the final course of the 8-course Interaction Design specialization by the University of California, San Diego, on Coursera.
The visual design identity for all web-enabled devices and print media can be viewed here.
The demo of the application visual prototype can be viewed on YouTube.
The International Space Station (ISS) Particle Database is a data-driven, interactive website that displays air quality data gathered during sampling experiments in 2016 and 2018. This public-facing website allows users to easily plot and visualize the data to learn more about air quality in an enclosed environment. The platform supports plotting and visualization of particle composition, geometry, morphology, sampling durations, and collection locations. The tool features an elemental composition pie chart, a plot tool, and an interactive plot tool, in addition to relevant resources and publications. I was responsible for the design, development, and deployment of the website user interface, using the R Shiny frontend library, JavaScript, HTML, and CSS. Furthermore, I collaborated with the NASA Ames Cloud Team to deploy the website using RStudio.
The full website can be seen here.
Details about the website and its demonstration can be viewed here.
NASA Academic Mission Services (NAMS) is a modern website and content management platform that showcases the contract's research capabilities and programs. I designed and developed the user interface for the website and collaborated on the website content with program managers, scientists, and engineers, using the MURA backend and JavaScript, HTML, and CSS on the frontend.
The full website can be seen here.
NASA Insight is a web platform that integrates, aggregates, and analyzes data from various technical and managerial data sources within NASA. It is used by several NASA teams, including the Extravehicular Activity (EVA) Office at Johnson Space Center, the Office of Safety and Mission Assurance at Stennis Space Center, and the Information Resource Directorate at Johnson Space Center, among others. I designed and developed multiple services and APIs that post-process and display data, using JavaScript (Node.js) and Python. I also created numerous user interfaces, including search and view pages, as well as visual style guides to support a coherent visual style across the platform's modular parts, using the Koa.js and Express JavaScript frameworks, HTML, and CSS. Furthermore, I was responsible for deploying the platform onto Amazon Web Services using Docker containers, in collaboration with the NASA Cloud Team.
The Quantum website is a modern web platform and content management system that features educational materials, news, and team information. I designed the frontend of the website. I collaborated with the engineering team on the development of the interactive features, including an educational platform, search, and automatic newsletter generation, using Laravel (PHP) as the backend framework, MySQL as the database, Python and JavaScript for scripts, Blade templates and Tailwind CSS for styling, and Docker containers. I was also responsible for website deployment on Amazon Web Services.
The full website can be seen here.
The Research Institute for Advanced Computer Science (RIACS) website is a modern website and content management system that showcases the institute's history and current research capabilities. I designed and developed the frontend of the website using JavaScript, HTML, and CSS, and created the new website logo. I collaborated with the RIACS director, program managers, and a designer on the content and look of the site, and used the MURA backend to add content.
The full website can be seen here.