SpotFinder is an Android application that guides users to the closest empty spot in a parking lot or structure, reducing the time spent searching for a place to park.
Joint Motion is a different take on pathfinding, using an iterative search to get a multi-jointed arm to reach a specified target point. The arm, which the user defines by specifying the number of segments and the length of each, searches the state space using binary search, settling into the configuration that minimizes the distance from its tip to the target point. Inspired by the arm on our high school robot, this simulation is applicable to situations where an arm is tasked with reaching a specific point in space.
Python - used to code the whole simulation, chosen for its ease of use and access to the numpy and matplotlib modules
Error calculation - The error is the Euclidean distance between the tip of the arm and the target point. Each arm segment is modeled as a vector, which simplifies computing each segment's endpoint, the movement of all segments attached to a given segment, and the position of the arm's tip in space. The arm is rooted at the origin to keep the math simple, and a graph of error vs. iteration is shown once the arm is in place.
Binary search - At each iteration, each segment of the arm rotates a set number of degrees about its fulcrum, taking all subsequent segments along with it; the tip position is noted and the error calculated. The configuration with the minimum error is stored, the arm moves to that configuration, and the next iteration starts from there. The angle of rotation starts at 180 degrees and is halved at each iteration. The search stops either when the error has been 0 for three iterations or when the maximum number of iterations is reached.
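To make the search concrete, here is a minimal sketch of the idea in 2D; the function names and the exact candidate set (rotating each joint by plus or minus the current step) are my own illustration, not the project's actual code:

```python
import numpy as np

def tip_position(lengths, angles):
    """Tip of the arm: each segment is a vector whose direction is the
    cumulative sum of the joint angles (the arm is rooted at the origin)."""
    cum = np.cumsum(angles)
    return np.sum(lengths[:, None] * np.stack([np.cos(cum), np.sin(cum)], axis=1), axis=0)

def align_arm(lengths, target, max_iters=30):
    lengths, target = np.asarray(lengths, float), np.asarray(target, float)
    angles = np.zeros(len(lengths))
    step = np.pi                      # 180 degrees to start, halved every iteration
    errors, zero_streak = [], 0
    for _ in range(max_iters):
        for i in range(len(angles)):
            # rotate joint i by +/- step (dragging all later segments with it)
            # and keep whichever of the three configurations has minimum error
            candidates = [angles.copy()]
            for delta in (step, -step):
                c = angles.copy()
                c[i] += delta
                candidates.append(c)
            angles = min(candidates,
                         key=lambda a: np.linalg.norm(tip_position(lengths, a) - target))
        err = np.linalg.norm(tip_position(lengths, angles) - target)
        errors.append(err)
        zero_streak = zero_streak + 1 if err == 0 else 0
        if zero_streak == 3:          # the project's stopping rule
            break
        step /= 2
    return angles, errors
```

Plotting the returned errors list with matplotlib reproduces the error-vs-iteration graph described above.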
This project was a submission to the COVID-19 Forecasting Challenge set up by Kaggle in the early days of the lockdown to crowdsource predictions about the rise of cases and deaths in different regions, with the goal of informing policymakers and healthcare workers about the numbers they might be dealing with.
While best-fit scores and prediction accuracies varied between data sets, the MLP regressor tended to have the best fit to existing data and be the most accurate when compared against new case data days later.
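For context, a fit like this takes only a few lines with scikit-learn; the feature choice (a single day index for one region) and the hyperparameters below are illustrative assumptions, not the submission's actual configuration:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# hypothetical single-region setup: predict cumulative cases from the day index
days = np.arange(60).reshape(-1, 1)            # days since the first reported case
cases = np.cumsum(np.random.poisson(50, 60))   # stand-in for real case counts

scaler = StandardScaler()
X = scaler.fit_transform(days)

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
model.fit(X, cases)

# forecast the next two weeks
future = scaler.transform(np.arange(60, 74).reshape(-1, 1))
predicted_cases = model.predict(future)
```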
RL Gridworld is an experiment in reinforcement learning applied to the traditional gridworld environment, in which the agent learns its way to the goal while avoiding traps placed randomly throughout the grid.
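A minimal tabular Q-learning loop of the kind such an agent typically runs looks like this; the grid size, trap placement, rewards, and hyperparameters here are illustrative assumptions:

```python
import random
import numpy as np

SIZE = 5
GOAL = (4, 4)
TRAPS = {(1, 3), (3, 1)}                      # illustrative layout
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = np.zeros((SIZE, SIZE, len(ACTIONS)))

for episode in range(500):
    r, c = 0, 0
    while (r, c) != GOAL and (r, c) not in TRAPS:
        # epsilon-greedy action selection
        if random.random() < EPSILON:
            a = random.randrange(len(ACTIONS))
        else:
            a = int(np.argmax(Q[r, c]))
        dr, dc = ACTIONS[a]
        nr = min(max(r + dr, 0), SIZE - 1)
        nc = min(max(c + dc, 0), SIZE - 1)
        reward = 1.0 if (nr, nc) == GOAL else -1.0 if (nr, nc) in TRAPS else -0.01
        # standard Q-learning update toward the best next-state value
        Q[r, c, a] += ALPHA * (reward + GAMMA * np.max(Q[nr, nc]) - Q[r, c, a])
        r, c = nr, nc
```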
Tic Tac Toe is an implementation of the classic game, with both a traditional player-vs-player version and a player-vs-computer version. The PvP version simply alternates between X and O, ending the game when the board is full or there is a winner. The exciting version is, of course, the PvC, which always wins or draws and never loses to a mere human player. It achieves this by predicting the human player's next move and either blocking or one-upping it.
Python - used to code the whole game, chosen for ease of use
Predicting the player's move - The computer models the player's next move as a probability distribution over the nine squares, corresponding to how likely it thinks the player is to place in each square. Each prediction begins with a uniform distribution, giving every square a probability score of 1. The computer then looks for any line in which the player occupies two adjacent squares and adds 1 to the probability score of the third square in that line, since the player is most likely to place there to complete the line. Next, it checks for any two player squares in a line separated by a gap and adds 0.5 to the probability score of the gap square, since the player is likely to place there too, though with less likelihood than in the previous check. Squares that are already filled are then given a probability score of 0, removing them from consideration. Finally, the distribution is normalized, leaving the computer with a heatmap of where the player is likely to place their next move.
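Here is a minimal sketch of this scoring scheme, assuming the first check applies to two adjacent marks and the "gap" check to two marks separated by the line's middle square; the board representation and names are mine, not the project's:

```python
import numpy as np

# the eight winning lines: rows, columns, diagonals
LINES = ([[(r, c) for c in range(3)] for r in range(3)] +
         [[(r, c) for r in range(3)] for c in range(3)] +
         [[(i, i) for i in range(3)], [(i, 2 - i) for i in range(3)]])

def predict_player(board, player="X"):
    """Heatmap of where `player` is likely to move next (board is a
    3x3 list of lists holding 'X', 'O', or ' ')."""
    scores = np.ones((3, 3))                   # start uniform
    for line in LINES:
        marks = [board[r][c] for r, c in line]
        if marks.count(player) == 2 and marks.count(" ") == 1:
            gap = line[marks.index(" ")]
            # +1 if the empty third square ends the line; +0.5 when it sits
            # between the two marks (my reading of the "gap" check)
            scores[gap] += 0.5 if marks.index(" ") == 1 else 1.0
    for r in range(3):
        for c in range(3):
            if board[r][c] != " ":             # filled squares are off-limits
                scores[r, c] = 0.0
    return scores / scores.sum()
```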
Choosing the next best move - The computer models its next move as a probability distribution just like its prediction of the player's next move, again starting with a uniform distribution.
It first tries to get the center square if it is still empty, since it is a powerful square to control, setting its probability score to 2.
It then looks for any line in which it controls two squares with the third square empty, and adds 1 to that square's probability score, indicating that it wants to fill that square and complete the line.
Then, it checks for any single square it controls that has two empty squares along a line in any direction, and adds 0.5 to the probability scores of those empty squares, indicating that it would like to set up a line it can later complete.
The computer then normalizes its own distribution, leaving it with two distributions, one for the human player's next move and one for its own, with the highest-probability square in each representing that player's optimal next move.
By adding the two distributions together and taking the argmax over squares, the computer determines where to place next: the result probabilistically combines the square the human player is most likely to take (which the computer should block) with the square that is optimal for the computer regardless of the human's move, as sketched below.
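Continuing the sketch above (reusing LINES and predict_player), the computer's own heatmap and the final move choice might look like this; the exact scoring helpers are my illustration:

```python
def score_computer(board, computer="O"):
    """The computer's own heatmap: center control, completing lines,
    and setting up future lines."""
    scores = np.ones((3, 3))
    if board[1][1] == " ":
        scores[1, 1] = 2.0                      # claim the powerful center
    for line in LINES:
        marks = [board[r][c] for r, c in line]
        empties = [sq for sq, m in zip(line, marks) if m == " "]
        if marks.count(computer) == 2 and len(empties) == 1:
            scores[empties[0]] += 1.0           # complete a line
        elif marks.count(computer) == 1 and len(empties) == 2:
            for sq in empties:
                scores[sq] += 0.5               # set up a completable line
    for r in range(3):
        for c in range(3):
            if board[r][c] != " ":
                scores[r, c] = 0.0
    return scores / scores.sum()

def choose_move(board):
    """Sum the two heatmaps; the argmax blocks the human and advances
    the computer's own position at the same time."""
    combined = predict_player(board, player="X") + score_computer(board)
    return np.unravel_index(int(np.argmax(combined)), combined.shape)

print(choose_move([[" "] * 3 for _ in range(3)]))  # empty board -> (1, 1), the center
```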
Balls is a physics simulation of balls subject to gravity and placed in an environment with obstacles that block their fall. The balls bounce and roll realistically, taking into account their different sizes, masses, and elasticities.
A future modification could be to model rolling more realistically with rotational mechanics, although the simulation is already convincingly realistic.
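A minimal version of the core update, assuming semi-implicit Euler integration and a flat floor (the real simulation handles arbitrary obstacles), could look like this:

```python
import numpy as np

G = np.array([0.0, -9.81])  # gravity (m/s^2)
DT = 1 / 60                 # timestep (s)

class Ball:
    def __init__(self, pos, vel, radius=0.1, mass=1.0, elasticity=0.8):
        self.pos, self.vel = np.array(pos, float), np.array(vel, float)
        self.radius, self.mass, self.elasticity = radius, mass, elasticity

    def step(self, floor_y=0.0):
        # semi-implicit Euler integration under gravity
        self.vel += G * DT
        self.pos += self.vel * DT
        # bounce off the floor, losing energy according to elasticity
        if self.pos[1] - self.radius < floor_y and self.vel[1] < 0:
            self.pos[1] = floor_y + self.radius
            self.vel[1] *= -self.elasticity
```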
This was the final project for the Self Driving Car Specialization on Coursera. The vehicle drives autonomously, maintaining a safe distance from the lead car and avoiding obstacles as it plans a path between two endpoints on a map in the CARLA simulation environment.
The vehicle implements a multi-part motion planner, with a collision checker for path planning, a state machine for behavior planning, and dynamic motion models for velocity planning.
Inspired by Ant and Slime Simulations by Sebastian Lague, this simulation models a colony of ants finding an optimal path between randomly placed food sources, using pseudorandom exploration guided by the pheromone trails laid down by previous generations.
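The heart of such a simulation is the pheromone field update; here is a minimal sketch with assumed evaporation, deposit, and diffusion constants:

```python
import numpy as np

EVAPORATION = 0.99   # fraction of pheromone kept each tick (assumed value)
DEPOSIT = 1.0        # pheromone an ant lays per step (assumed value)

def update_pheromone(field, ant_positions):
    """One tick: ants deposit pheromone, then the field evaporates and
    diffuses slightly so trails fade unless they are reinforced."""
    for r, c in ant_positions:
        field[r, c] += DEPOSIT
    field *= EVAPORATION
    # cheap diffusion: blend each cell toward the average of its neighbors
    blurred = (np.roll(field, 1, 0) + np.roll(field, -1, 0) +
               np.roll(field, 1, 1) + np.roll(field, -1, 1)) / 4
    return 0.9 * field + 0.1 * blurred

field = update_pheromone(np.zeros((200, 200)), [(100, 100)])
```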
University of California, Berkeley • May 2023
I am a final-year EECS student, particularly interested in robotics, computer vision, and machine learning. I have been conducting research in robotics with AutoLab, and am excited to continue this research in a graduate program next year.
Some interesting classes I have taken include Object Oriented Analysis and Design, Data Structures and Algorithms, Intro Robotics, Signals and Systems, Intro AI, Physics, Discrete Math, and Probability and Random Processes.
My focus has been in surgical and industrial robotics, exploring how automation in these areas can complement human abilities and help us perform tasks better than we could on our own.
I co-first authored the paper Digital Twin Framework for Telesurgery in the Presence of Varying Network QoS (link), which we presented at CASE 2022 in Mexico City.
I created a robust control and feedback system to enable telesurgery over lossy, low-bandwidth networks, such as those found on battlefields and in space, and integrated the SRI Taurus VR simulator with the dVRK surgical robot to implement closed-loop controls for FLS Peg Transfer, a basic training task for laparoscopic surgery.
I then planned and executed hundreds of autonomous and semi-autonomous trials to validate the performance of our digital twin.
I also worked on a shunt insertion project, helping write Automating Vascular Shunt Insertion with the dVRK Surgical Robot, which is under review for ICRA 2023.
I helped develop the perception pipeline and construct the learning-based visual servoing algorithm to reliably perform shunt insertion.
On the industrial side, I worked on a multi-object grasping project, and helped write Learning to Efficiently Plan Robust Frictional Multi-Object Grasps, which is also under review for ICRA 2023.
I worked on the perception and manipulation pipelines, using OpenCV and the UR5 motion libraries, respectively. I collected data to train our grasp prediction neural network, and ran trials to validate the performance of our method.
I am currently working on an extension to this project involving tableware such as cups, bowls, plates, and utensils, hoping to learn robust multi-object grasps to efficiently clear these objects from the workspace.
I designed, deployed, and validated performant algorithms and efficient data models to revamp the permissions framework.
This involved refactoring the data model to implement transitive closure for ancestry (the idea is sketched below), and building an optimized algorithm to traverse it.
I organized the dual write and backfill of the new data model to migrate hundreds of active source-of-truth tables, and architected asynchronous sampling, execution, and logging systems to performance test new permission checks.
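As a toy illustration of the transitive-closure idea (not the production schema or code), precomputing every ancestor-descendant pair turns an ancestry check into a single lookup:

```python
from collections import defaultdict

# hypothetical parent -> children edges in a permissions hierarchy
children = {"root": ["teamA", "teamB"], "teamA": ["projX"], "teamB": [], "projX": []}

def build_closure(children):
    """Precompute every (ancestor, descendant) pair so an ancestry
    check is a single lookup instead of a walk up the tree."""
    closure = defaultdict(set)
    def visit(node):
        desc = set()
        for child in children.get(node, []):
            desc |= {child} | visit(child)
        closure[node] = desc
        return desc
    for root in children:
        visit(root)
    return closure

closure = build_closure(children)
print("projX" in closure["root"])  # True: O(1) ancestry check after precompute
```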
For the intern hackathon, I wrote Python and JavaScript linters to enforce codebase consistency and flag inefficient programming practices, and published these to the production codebase.
I developed an API for global GraphQL search across all data sources for our alert dashboard, ensuring this API was robust to schema alterations. I integrated this API with a custom React-based frontend featuring incremental predictive search, modeled after Spotlight Search on Mac.
I built user presence indicators for the alert dashboard, inspired by the indicators on Google Docs, and implemented database listener channels in the backend for real-time presence and status updates. These presence bubbles showed when another user opened the incident page, when they became inactive, and when they left.
This was implemented with a React-based web component frontend and a GraphQL backend.
I used quantum computing to develop regression models that were faster and more accurate than their classical counterparts, leading to more useful predictions about future market conditions.
I also developed a quantum reinforcement learning agent that used quantum computing to exponentially speed up the pathfinding step of the Q-learning model, allowing for faster training times on datasets with many features.
The model took advantage of its fast training time to continually retrain itself at regular intervals, keeping up to date with trends in the latest data.
I revamped the existing WebCMA chatbot using BotEngine and natural language processing to create conversational interactions that allowed real estate agents to generate Competitive Market Analysis reports from the web platform with ease.
I added report generation and sharing features to the web dashboard, allowing agents to sync the reports generated through the chatbot to a more permanent setup to share with clients and store for later reference. This was built with a robust Amazon EC2 backend to manage syncing and sharing reports and to ensure reliable service.
I also migrated the existing chatbot to Amazon Alexa, enabling agents to create and send CMA reports using only their voice and reducing report generation time by 80%.
I completed Motion Planning for Self Driving Cars (course 4 of 4 in the Self Driving Car Specialization) on Coursera with a final grade of 93%.
The course covers static occupancy grids, trajectory rollout for static collision checks, high-level mission planning using A* search on road network graphs, mid-level behavior planning using finite state machines, and low-level local planning using dynamic vehicle models and optimizers to generate smooth and efficient paths.
The final project involves developing three components: a state machine behavior planner that guides the car through a stop sign; a local planner that drives the car safely and efficiently by generating candidate paths, checking them for collisions, and selecting the optimal one; and a velocity profile generator that uses the vehicle's dynamic motion models to convert the chosen path into a series of [x, y, theta, vel] waypoints for the vehicle controller.
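To give a flavor of the behavior-planning piece, a stop-sign state machine might be structured like this; the state names and thresholds are illustrative, not the project's exact values:

```python
from enum import Enum, auto

class State(Enum):
    FOLLOW_LANE = auto()
    DECELERATE_TO_STOP = auto()
    STAY_STOPPED = auto()

STOP_THRESHOLD = 0.1   # m/s: treat the car as stopped below this speed
STOP_TICKS = 30        # ticks to remain stopped at the sign

def transition(state, speed, stop_sign_ahead, ticks_stopped):
    """One step of a stop-sign behavior planner (illustrative thresholds)."""
    if state == State.FOLLOW_LANE and stop_sign_ahead:
        return State.DECELERATE_TO_STOP
    if state == State.DECELERATE_TO_STOP and speed < STOP_THRESHOLD:
        return State.STAY_STOPPED
    if state == State.STAY_STOPPED and ticks_stopped >= STOP_TICKS:
        return State.FOLLOW_LANE
    return state
```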
All my code from exercises and assignments can be found here.
My certification ID is CS6EE9DY2ACW, which you can verify here.
I completed Visual Perception for Self Driving Cars (course 3 of 4 in the Self Driving Car Specialization) on Coursera with a final grade of 91%.
The course covers intrinsic and extrinsic camera calibration, feature detection and matching, object tracking, visual odometry, feedforward neural networks, object detection, and semantic segmentation.
The final project involves using a depth map and semantic segmentation combined with camera input to estimate drivable space in 3D, identify lane lines, and perform collision detection with other cars on the road.
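A minimal sketch of the back-projection step, assuming a pinhole camera model and a CityScapes-style "road" label (the actual label id depends on the segmentation model used):

```python
import numpy as np

def drivable_points(depth, seg_mask, K, road_label=7):
    """Back-project pixels labeled 'road' into 3D camera coordinates."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    v, u = np.nonzero(seg_mask == road_label)  # pixel coordinates of road
    z = depth[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)  # N x 3 points of drivable space
```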
All my code from exercises and assignments can be found here.
My certification ID is 6EWJDHD2HHAH, which you can verify here.
I completed State Estimation and Localization for Self Driving Cars (course 2 of 4 in the Self Driving Car Specialization) on Coursera with a final grade of 100%.
The course covers standard and recursive least squares estimation, linear and nonlinear Kalman filters, pose estimation from GPS and IMU data, and localization and motion tracking through LiDAR point clouds.
The final project involves combining GPS, IMU, and LiDAR data through an error-state extended Kalman filter to estimate the pose of the vehicle in real time, accounting for sensor inaccuracies and dropouts.
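A heavily simplified skeleton of the predict/correct loop, tracking only position and velocity (the course's full error-state EKF also tracks orientation with quaternions and uses IMU gyro rates):

```python
import numpy as np

def predict(x, P, accel, Q, dt):
    """IMU acceleration drives the motion model between position fixes."""
    F = np.eye(6)
    F[:3, 3:] = np.eye(3) * dt                 # position integrates velocity
    x = x.copy()
    x[:3] += x[3:] * dt + 0.5 * accel * dt**2
    x[3:] += accel * dt
    P = F @ P @ F.T + Q                        # uncertainty grows between fixes
    return x, P

def correct(x, P, z, R):
    """Fold in a GPS or LiDAR position measurement z."""
    H = np.hstack([np.eye(3), np.zeros((3, 3))])  # measurement selects position
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
    dx = K @ (z - x[:3])                          # error state to inject
    x = x + dx
    P = (np.eye(6) - K @ H) @ P
    return x, P
```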
All my code from exercises and assignments can be found here.
My certification ID is 7ZHRPNTRXKBZ, which you can verify here.
I completed Introduction to Self Driving Cars (course 1 of 4 in the Self Driving Car Specialization) on Coursera with a final grade of 98%.
The course covers the requirements and challenges for designing self driving cars, common hardware and software architectures, safety concerns, the dynamic vehicle model, and lateral and longitudinal control.
The final project involves designing lateral and longitudinal PID controllers capable of tracking a specified velocity profile to autonomously drive the car around a virtual racetrack.
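The longitudinal side reduces to a textbook PID loop; this sketch uses illustrative gains and ignores actuator limits:

```python
class PID:
    """Simple longitudinal PID: track a target speed with throttle/brake."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral, self.prev_error = 0.0, 0.0

    def step(self, target_speed, current_speed, dt):
        error = target_speed - current_speed
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

controller = PID(kp=1.0, ki=0.1, kd=0.05)   # illustrative gains
throttle = controller.step(target_speed=15.0, current_speed=12.0, dt=1 / 30)
```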
All my code from exercises and assignments can be found here.
My certification ID is 255MSF793TNG, which you can verify here.
These are the programming languages and platforms I am most familiar with and have the most experience in. However, I am quick and eager to learn new things, and enjoy going beyond my comfort zone to understand a concept or figure something out.
In addition, I have developed projects in which I've done significant work with computer vision; sensor fusion; 3D data processing; supervised, unsupervised, and reinforcement learning; deep learning; physics simulations; data analytics; and data visualization.