Kiersten, Jacob, Joe, Damien
- Kiersten - Mechanical Engineering, Ctrls & Robotics (MC34) - Class of 2025
- Jacob - Electrical Engineering (EC27) - Class of 2007
- Joe - Mechanical Engineering, Ctrls & Robotics (MC34) - Class of 2025
- Damien - Mechanical Engineering (MC25) - Class of 2026
Our project goal was to develop a prototype of a self-driving campus rideshare service exclusively for UCSD students that uses facial recognition as an extra layer of safety and security for riders. We aimed to develop ROS2 packages that run in conjunction with the UCSD Robocar framework, programming our car to perform controlled tasks while driving autonomously.
- Ride Request
- When launching this node, the user is prompted to define 4 variables: `first_name`, `last_name`, `pickup_location`, `dropoff_location`
- This "ride-request" node then publishes these details to a topic for additional nodes to access when determining the robot's subsequent actions
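The prompt-and-publish flow above can be sketched as follows. The four field names come from the list above; the helper function itself is hypothetical (in the real node, the resulting record would be published on a ROS2 topic via `rclpy`):

```python
# Hypothetical sketch of the ride-request input step: the four fields the
# node prompts for, bundled into one record before publishing.
REQUIRED_FIELDS = ("first_name", "last_name", "pickup_location", "dropoff_location")

def build_ride_request(first_name, last_name, pickup_location, dropoff_location):
    """Bundle the four user-supplied values, rejecting empty entries."""
    request = {
        "first_name": first_name.strip(),
        "last_name": last_name.strip(),
        "pickup_location": pickup_location.strip(),
        "dropoff_location": dropoff_location.strip(),
    }
    missing = [field for field, value in request.items() if not value]
    if missing:
        raise ValueError(f"missing fields: {missing}")
    return request
```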
- Custom User Interfaces
- This package defines custom interfaces for the parameters entered by the user
- Stores the user input data under predefined fields for our nodes to access and compare, e.g. `identified_face` and `first_name` of the ride request
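A ROS2 interface for these fields might look like the following `.msg` sketch. The field names follow the ride-request variables above, but this is illustrative only; the actual definitions in `user_input_interfaces` may differ:

```
# RideRequest.msg (illustrative sketch, not the project's actual definition)
string first_name
string last_name
string pickup_location
string dropoff_location
```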
- Face Recognition
- Facial recognition and verification nodes subscribe to the "Name" message given by the user and publish results to a new topic
- Upon arriving at the pickup point, this module deploys facial recognition using open-source Python libraries (`face_recognition`, `cv2`, `dlib`)
- The service initiates a live webcam stream through a mounted Oak-D Lite and attempts to identify the student
- If the student's identity is verified as the individual who requested the ride, navigation to the dropoff is authorized
- If the identified student does not match the name given in the ride request, the car cancels the pickup and returns to base
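The authorize-or-cancel decision described above reduces to comparing the recognized name against the ride request. A minimal hypothetical sketch of that decision step (in the real system, this sits inside a ROS2 service, and `identified_face` comes from the `face_recognition` pipeline):

```python
def authorize_ride(identified_face: str, requested_first_name: str) -> str:
    """Compare the face-recognition result against the ride request.

    Returns the next action: drive to the dropoff on a match, otherwise
    cancel the pickup and return to base.
    """
    if identified_face.strip().lower() == requested_first_name.strip().lower():
        return "navigate_to_dropoff"
    return "return_to_base"
```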
- GPS navigation
- A package dedicated to extracting the pickup and dropoff locations, which are converted to their corresponding `.csv` path datasets and used in mapping the route and navigating the path
- Subscribes to the `pickup` and `dropoff` location topics and matches the input to a saved path such as `ebu2-to-ebu1.csv`
- Client/action server node structure so the driving process happens once as a service, unlike the publisher nodes
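The topic-to-path matching can be illustrated with a small lookup. The filename convention follows the `ebu2-to-ebu1.csv` example above; the helper and the set of saved paths are assumptions for illustration:

```python
# Hypothetical lookup from (pickup, dropoff) to a pretrained GPS path file,
# following the "<pickup>-to-<dropoff>.csv" naming shown above.
SAVED_PATHS = {"ebu2-to-ebu1.csv", "ebu1-to-ebu2.csv"}  # illustrative set

def path_for(pickup: str, dropoff: str) -> str:
    """Map a pickup/dropoff pair to its saved .csv path dataset."""
    filename = f"{pickup.lower()}-to-{dropoff.lower()}.csv"
    if filename not in SAVED_PATHS:
        raise KeyError(f"no saved path for {pickup} -> {dropoff}")
    return filename
```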
- LiDAR
- A package utilizing the mounted LD06 LiDAR for object detection as a safety measure for collision avoidance
- This launches as a submodule of the overall Robocar package, running in the background for emergency-stop capability
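The background emergency-stop check amounts to thresholding the closest LiDAR return. A minimal sketch under assumed values (the threshold and function are illustrative, not the project's tuned implementation; the real check runs inside the Robocar LiDAR submodule on live LD06 scans):

```python
STOP_DISTANCE_M = 0.5  # assumed threshold, not the project's tuned value

def should_emergency_stop(ranges, stop_distance=STOP_DISTANCE_M):
    """Return True if any valid LiDAR range is inside the stop distance.

    `ranges` is a list of distances in meters (e.g. from an LD06 scan);
    non-positive readings are treated as invalid and skipped.
    """
    valid = [r for r in ranges if r > 0]
    return bool(valid) and min(valid) < stop_distance
```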
- `ride_request_publisher.py`: ride request node
- `user_input_interfaces`: custom interface definitions
- `face_rec_pkg`: face recognition package
- `face_publisher.py`: face recognition node for publishing the identified name and video stream
- `verification_service.py`: identity verification node
- GPS Navigation Training: DonkeyCar framework
- See the README in our `src` directory for a breakdown of how our packages run together
- See the README in our `docker` directory for a breakdown of how to run the Docker container for our program with all dependencies built into the image
- Complete package integration with ROS
- We successfully trained our car on several different paths using GPS PointOneNav in DonkeyCar, storing the paths as `.csv` files
- Unfortunately, we didn't have enough time to ROS-ify the Donkey GPS framework to run the paths from within our ROS/Robocar modules
- LiDAR
- If our car were driving autonomously with GPS only, we would activate the LiDAR to incorporate an emergency stop
- Object detection for collision avoidance while driving on the pretrained GPS paths
| Part | CAD Model | Designer |
|---|---|---|
| Front Camera and LiDAR Mount | | Kiersten |
| Side Camera and GNSS Puck Mount | | Kiersten |
| Acrylic Base | | Damien |
| Side Paneling | | Damien |
| Part | CAD Model | Source |
|---|---|---|
| Jetson Nano Case | | Thingiverse |
| Oak-D Lite Case | | Thingiverse |
Below is a circuit diagram of the electronic hardware setup for the car.
To program the Jetson Nano, we accessed it over a remote SSH connection to its embedded Linux system and ran a Docker container with all the dependencies needed by our packages. This eliminated incompatibility issues and maximized resource efficiency on the Jetson. We used a variety of virtualization software, including VMware and WSL2, to build, test, and launch our programs.
The base image pulled from Docker Hub for our project development contained the UCSD Robocar module running on Linux (Ubuntu 20.04). The Robocar module, consisting of several submodules using ROS/ROS2, was originally developed by Dominic Nightingale, a UC San Diego graduate student. His framework was built for use with a wide variety of sensors and actuation methods on scale autonomous vehicles, making it easy to control a car-like robot while it simultaneously performs autonomous tasks.
For our early-quarter course deliverables, we used DonkeyCar to train a car to drive autonomous laps around a track in a simulated environment. Using deep learning, we recorded visual data of driving on the simulated track, trained the car on that data, and then raced it on a remote server. This prepared us for training our physical car on an outdoor track with computer vision.
Thank you to my teammates, Professor Jack Silberman, and our incredible TA Arjun Naageshwaran for an amazing Winter 2024 class!
- Kiersten | [email protected]
- Jacob | [email protected]
- Joe | [email protected]
- Damien | [email protected]