Here's my resume!

Education

B.S., Electrical Engineering and Computer Science

University of California, Berkeley (expected May 2023)

I am a final-year EECS student, particularly interested in robotics, computer vision, and machine learning. I have been conducting research in robotics with AutoLab, and am excited to continue this research in a graduate program next year.

Some interesting classes I have taken include Object Oriented Analysis and Design, Data Structures and Algorithms, Intro Robotics, Signals and Systems, Intro AI, Physics, Discrete Math, and Probability and Random Processes.

Work Experience

UC Berkeley AutoLab

My focus has been on surgical and industrial robotics, exploring how automation in these areas can complement human abilities and help us perform tasks better than we could on our own.

I co-first authored the paper Digital Twin Framework for Telesurgery in the Presence of Varying Network QoS (link), which we presented at CASE 2022 in Mexico City. I created a robust control and feedback system to enable telesurgery over lossy, low-bandwidth networks, such as those found on battlefields and in space, and integrated the SRI Taurus VR simulator with the dVRK surgical robot to implement closed-loop control for FLS Peg Transfer, a basic training task for laparoscopic surgery. I then planned and executed hundreds of autonomous and semi-autonomous trials to validate the performance of our digital twin.
I also worked on a shunt insertion project, helping write Automating Vascular Shunt Insertion with the dVRK Surgical Robot, which is under review for ICRA 2023. There, I helped develop the perception pipeline and the learning-based visual servoing algorithm used to reliably perform shunt insertion.

On the industrial side, I worked on a multi-object grasping project, and helped write Learning to Efficiently Plan Robust Frictional Multi-Object Grasps, which is also under review for ICRA 2023. I worked on the perception and manipulation pipelines, using OpenCV and the UR5 motion libraries, respectively. I collected data to train our grasp prediction neural network, and ran trials to validate the performance of our method.
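
To give a flavor of the kind of perception step such a pipeline might start with, here is a hypothetical OpenCV sketch that segments objects on an overhead workspace image and proposes grasp centers; the thresholds, image, and structure are my own illustration, not the method from the paper.

    # Hypothetical sketch of an overhead-camera perception step: segment objects on a
    # workspace and propose grasp candidates. Thresholds and the image path are made up.
    import cv2

    def find_grasp_candidates(image_path, min_area=500):
        img = cv2.imread(image_path)
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        # Objects are assumed darker than the workspace; invert-threshold to get a mask.
        _, mask = cv2.threshold(gray, 120, 255, cv2.THRESH_BINARY_INV)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        candidates = []
        for contour in contours:
            if cv2.contourArea(contour) < min_area:
                continue  # skip small specks of noise
            (cx, cy), (w, h), angle = cv2.minAreaRect(contour)  # center, size, orientation
            candidates.append({"center": (cx, cy), "size": (w, h), "angle": angle})
        return candidates
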
I am currently extending this project to tableware such as cups, bowls, plates, and utensils, with the goal of learning robust multi-object grasps that efficiently clear these objects from the workspace.

Benchling

I designed, deployed, and validated performant algorithms and efficient data models to revamp the permissions framework. This involved refactoring the data model to implement transitive closure for ancestry, and building an optimized algorithm to traverse the data model. I organized the dual-write and backfill of the new data model to migrate hundreds of active source-of-truth tables, and architected asynchronous sampling, execution, and logging systems to performance-test new permission checks.
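
As a simplified illustration of the transitive-closure idea (with made-up entity names, not Benchling's actual schema), precomputing every ancestor-descendant pair turns a recursive ancestry walk into a single membership check:

    # Simplified sketch of transitive closure for ancestry: from direct parent links,
    # precompute every (ancestor, descendant) pair so a permission check on an item
    # becomes a lookup instead of a recursive traversal. Names and data are made up.
    parent_of = {                 # direct parent links: child -> parent
        "folder_a": "project_1",
        "entry_x": "folder_a",
        "entry_y": "folder_a",
    }

    def build_closure(parent_of):
        """Return the set of (ancestor, descendant) pairs, including self-pairs."""
        closure = set()
        for node in set(parent_of) | set(parent_of.values()):
            closure.add((node, node))
            current = node
            while current in parent_of:
                current = parent_of[current]
                closure.add((current, node))
        return closure

    closure = build_closure(parent_of)

    def has_access(granted_items, item):
        """A grant on any ancestor of `item` implies access to `item`."""
        return any((granted, item) in closure for granted in granted_items)

    print(has_access({"project_1"}, "entry_x"))  # True: project_1 is an ancestor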

For the intern hackathon, I wrote Python and JavaScript linters to enforce codebase consistency and flag inefficient programming practices, and published these to the production codebase.
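
As an example of the kind of check such a linter can express (the rule here is illustrative, not one of the checks I actually shipped), a short ast-based pass can flag functions with mutable default arguments:

    # Hypothetical ast-based lint rule: flag functions that declare mutable default
    # arguments. Illustrative only; not one of the production checks.
    import ast
    import sys

    class MutableDefaultChecker(ast.NodeVisitor):
        def visit_FunctionDef(self, node):
            for default in node.args.defaults + node.args.kw_defaults:
                if isinstance(default, (ast.List, ast.Dict, ast.Set)):
                    print(f"{node.name} (line {node.lineno}): mutable default argument")
            self.generic_visit(node)

    source = open(sys.argv[1]).read()
    MutableDefaultChecker().visit(ast.parse(source))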

ServiceNow

I developed an API for global GraphQL search across all data sources for our alert dashboard, ensuring this API was robust to schema alterations. I integrated this API with a custom React-based frontend featuring incremental predictive search, modeled after Spotlight Search on Mac.

I built user presence indicators for the alert dashboard, inspired by the indicators on Google Docs, and implemented database listener channels in the backend for real-time presence and status updates. The indicators show when another user opens the incident page, when they become inactive, and when they leave. This was implemented with a React-based web component frontend and a GraphQL backend.
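
A minimal sketch of the listener-channel idea, using PostgreSQL's LISTEN/NOTIFY via psycopg2 purely for illustration (the channel name, payload format, and choice of database are assumptions, not the actual ServiceNow stack):

    # Minimal sketch of real-time presence over a database listener channel, shown here
    # with PostgreSQL LISTEN/NOTIFY and psycopg2. All names and payloads are made up.
    import json
    import select
    import psycopg2

    conn = psycopg2.connect("dbname=alerts")
    conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
    cur = conn.cursor()
    cur.execute("LISTEN presence_updates;")

    def publish_presence(user_id, incident_id, status):
        """Writers call NOTIFY when a user opens, idles on, or leaves an incident page."""
        payload = json.dumps({"user": user_id, "incident": incident_id, "status": status})
        cur.execute("SELECT pg_notify('presence_updates', %s);", (payload,))

    while True:
        # Block until the connection's socket has data, then drain pending notifications.
        if select.select([conn], [], [], 5) == ([], [], []):
            continue  # timed out; poll again
        conn.poll()
        while conn.notifies:
            note = conn.notifies.pop(0)
            event = json.loads(note.payload)
            print(f"{event['user']} is now {event['status']} on incident {event['incident']}")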

Next Capital Tech

I used quantum computing to develop advanced regression models that were faster and more accurate than their classical counterparts, leading to more useful predictions about future market conditions.

I also developed a quantum reinforcement learning agent that used quantum computing to exponentially speed up the pathfinding step of the Q-learning model, allowing for faster training on datasets with many features. The short training time let the model retrain itself at regular intervals, keeping it up to date with trends in the latest data.

Kaydoh

I revamped the existing WebCMA chatbot using BotEngine and natural language processing, creating conversational interactions that let real estate agents easily generate Competitive Market Analysis (CMA) reports from the web platform.

I added report generation and sharing features to the web dashboard, allowing agents to sync reports generated through the chatbot to the dashboard, where they can be shared with clients and stored for later reference. This was built on a robust Amazon EC2 backend that manages report syncing and sharing and ensures reliable service.

I also migrated the existing chatbot to Amazon Alexa, enabling agents to create and send CMA reports using only their voice and reducing report generation time by 80%.

Certifications

Motion Planning for Self Driving Cars

I completed Motion Planning for Self Driving Cars (course 4 of 4 in the Self Driving Car Specialization) on Coursera with a final grade of 93%.

The course covers static occupancy grids, trajectory rollout for static collision checks, high-level mission planning using A* search on road network graphs, mid-level behavior planning using finite state machines, and low-level local planning using dynamic vehicle models and optimizers to generate smooth and efficient paths.
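
For flavor, here is what A* mission planning over a road network graph looks like in miniature (the graph, coordinates, and costs are made up; this is not the course's assignment code):

    # Small sketch of A* mission planning on a road-network graph. The graph, node
    # positions, and road lengths are invented for illustration.
    import heapq
    import math

    coords = {"A": (0, 0), "B": (1, 0), "C": (1, 1), "D": (2, 1)}   # node -> (x, y)
    roads = {"A": [("B", 1.0), ("C", 1.6)],                          # node -> [(neighbor, length)]
             "B": [("D", 1.5)],
             "C": [("D", 1.0)],
             "D": []}

    def heuristic(node, goal):
        """Straight-line distance, an admissible estimate of remaining road length."""
        (x1, y1), (x2, y2) = coords[node], coords[goal]
        return math.hypot(x2 - x1, y2 - y1)

    def a_star(start, goal):
        frontier = [(heuristic(start, goal), 0.0, start, [start])]
        best_cost = {start: 0.0}
        while frontier:
            _, cost, node, path = heapq.heappop(frontier)
            if node == goal:
                return path, cost
            for neighbor, length in roads[node]:
                new_cost = cost + length
                if new_cost < best_cost.get(neighbor, float("inf")):
                    best_cost[neighbor] = new_cost
                    heapq.heappush(frontier, (new_cost + heuristic(neighbor, goal),
                                              new_cost, neighbor, path + [neighbor]))
        return None, float("inf")

    print(a_star("A", "D"))  # (['A', 'B', 'D'], 2.5)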

The final project involves developing three components: a state machine behavior planner that guides the car through a stop sign; a local planner that drives the car safely and efficiently by generating candidate paths, checking them for collisions, and selecting the optimal one; and a velocity profile generator that incorporates the vehicle's dynamic motion models to convert the chosen path into a series of [x, y, theta, vel] waypoints to pass to the vehicle controller.

All my code from exercises and assignments can be found here.
My certification ID is CS6EE9DY2ACW, which you can verify here.

Visual Perception for Self Driving Cars

I completed Visual Perception for Self Driving Cars (course 3 of 4 in the Self Driving Car Specialization) on Coursera with a final grade of 91%.

The course covers intrinsic and extrinsic camera calibration, feature detection and matching, object tracking, visual odometry, feedforward neural networks, object detection, and semantic segmentation.

The final project involves using a depth map and semantic segmentation combined with camera input to estimate drivable space in 3D, identify lane lines, and perform collision detection with other cars on the road.
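
The core geometric step is back-projecting road-labeled pixels into 3D; here is a rough numpy sketch of that step with invented intrinsics and stand-in inputs (not the actual project code):

    # Rough sketch of estimating drivable space in 3D: take pixels a segmentation
    # network labels as "road" and back-project them through the camera intrinsics
    # using the depth map. Intrinsics, label id, and inputs are all stand-ins.
    import numpy as np

    f, cx, cy = 800.0, 320.0, 240.0                    # assumed pinhole intrinsics (pixels)
    ROAD = 7                                           # assumed "road" label id

    depth = np.random.uniform(5.0, 40.0, (480, 640))   # stand-in depth map (meters)
    seg = np.full((480, 640), ROAD)                    # stand-in segmentation output

    v, u = np.nonzero(seg == ROAD)                     # pixel coordinates of road pixels
    z = depth[v, u]
    x = (u - cx) * z / f                               # back-project into the camera frame
    y = (v - cy) * z / f
    road_points = np.stack([x, y, z], axis=1)

    # Fit a ground plane z = p*x + q*y + r to the road points with least squares;
    # drivable space can then be reasoned about relative to this plane.
    A = np.column_stack([x, y, np.ones_like(z)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    print("ground plane: z = {:.3f}x + {:.3f}y + {:.3f}".format(*coeffs))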

All my code from exercises and assignments can be found here.
My certification ID is 6EWJDHD2HHAH, which you can verify here.

State Estimation and Localization for Self Driving Cars

I completed State Estimation and Localization for Self Driving Cars (course 2 of 4 in the Self Driving Car Specialization) on Coursera with a final grade of 100%.

The course covers standard and recursive least squares estimation, linear and nonlinear Kalman filters, pose estimation from GPS and IMU data, and localization and motion tracking through LiDAR point clouds.
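
For flavor, here is a minimal linear Kalman filter of the kind the course builds on, tracking 1D position and velocity from noisy position measurements (the noise values and measurements are toy numbers, not assignment code):

    # Minimal linear Kalman filter sketch: 1D constant-velocity motion observed through
    # noisy position measurements. All numbers are toy values for illustration.
    import numpy as np

    dt = 0.1
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition for (position, velocity)
    H = np.array([[1.0, 0.0]])              # we only measure position
    Q = 0.01 * np.eye(2)                    # process noise covariance
    R = np.array([[0.5]])                   # measurement noise covariance

    x = np.array([[0.0], [1.0]])            # initial state estimate
    P = np.eye(2)                           # initial state covariance

    for z in [0.11, 0.23, 0.28, 0.41, 0.52]:          # noisy position measurements
        # Predict: propagate the state and its uncertainty through the motion model.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update: blend in the measurement, weighted by the Kalman gain.
        innovation = np.array([[z]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ innovation
        P = (np.eye(2) - K @ H) @ P
        print(f"pos={x[0, 0]:.3f}  vel={x[1, 0]:.3f}")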

The final project involves combining GPS, IMU, and LiDAR data through an error-state extended Kalman filter to estimate the pose of the vehicle in real time, accounting for sensor inaccuracies and dropouts.

All my code from exercises and assignments can be found here.
My certification ID is 7ZHRPNTRXKBZ, which you can verify here.

Introduction to Self Driving Cars

I completed Introduction to Self Driving Cars (course 1 of 4 in the Self Driving Car Specialization) on Coursera with a final grade of 98%.

The course covers the requirements and challenges of designing self-driving cars, common hardware and software architectures, safety concerns, the dynamic vehicle model, and lateral and longitudinal control.

The final project involves designing lateral and longitudinal PID controllers capable of tracking a specified velocity profile to autonomously drive the car around a virtual racetrack.
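
As a toy illustration of the longitudinal half of that controller (gains, timestep, and the extremely simplified vehicle model are made up, not the project code):

    # Toy sketch of a longitudinal PID controller tracking a target speed. Gains, the
    # timestep, and the one-line vehicle model are invented for illustration.
    class PID:
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def step(self, error):
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    dt = 0.05
    controller = PID(kp=0.8, ki=0.2, kd=0.05, dt=dt)
    speed, target = 0.0, 15.0                          # m/s

    for _ in range(200):
        throttle = controller.step(target - speed)
        throttle = max(0.0, min(1.0, throttle))        # clamp to the valid throttle range
        # Extremely simplified longitudinal dynamics: throttle accelerates, drag slows.
        accel = 4.0 * throttle - 0.1 * speed
        speed += accel * dt

    print(f"speed after {200 * dt:.0f} s: {speed:.2f} m/s (target {target} m/s)")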

All my code from exercises and assignments can be found here.
My certification ID is 255MSF793TNG, which you can verify here.

Skills

These are the programming languages and platforms I am most familiar with and have the most experience in. However, I am quick and eager to learn new things, and enjoy going beyond my comfort zone to understand a concept or figure something out.

  • Python
  • C++
  • Java
  • Linux Bash Scripting
  • JavaScript
  • Android Development
  • HTML & CSS

In addition, I have developed projects involving significant work with computer vision, sensor fusion, and 3D data processing; supervised, unsupervised, and reinforcement learning; deep learning; physics simulations; data analytics; and data visualization.