conda info --envs
conda create -n RoboND python=3.5
source activate RoboND
jupyter notebook
python drive_rover.py
- The goal is to get the rover driving autonomously and to find and pick up rocks
- Set up a Miniconda environment for the Python code
- Complete all training in the Jupyter Notebook
- Record a video
- Install the Unity engine simulator
- Drive the rover with Python via the Unity engine
- Map at least 40% of the environment with 60% fidelity
- Find at least 1 rock
- Download the simulator and take data in "Training Mode"
- Test out the functions in the Jupyter Notebook provided
- Add functions to detect obstacles and samples of interest (golden rocks)
- Fill in the `process_image()` function with the appropriate image processing steps (perspective transform, color threshold etc.) to get from raw images to a map. The `output_image` you create in this step should demonstrate that your mapping pipeline works. (A sketch of these steps follows this list.)
- Use `moviepy` to process the images in your saved dataset with the `process_image()` function. Include the video you produce as part of your submission.
- Fill in the `perception_step()` function within the `perception.py` script with the appropriate image processing functions to create a map and update `Rover()` data (similar to what you did with `process_image()` in the notebook).
- Fill in the `decision_step()` function within the `decision.py` script with conditional statements that take into consideration the outputs of the `perception_step()` in deciding how to issue throttle, brake and steering commands.
- Iterate on your perception and decision function until your rover does a reasonable (need to define metric) job of navigating and mapping.
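The core of my `process_image()` pipeline is a perspective transform followed by a color threshold. The snippet below is a minimal sketch of those two pieces, assuming OpenCV and NumPy; the helper names, default threshold range, and the commented obstacle-map lines are illustrative assumptions that mirror the approach described later, not my exact tuned values.

```python
import cv2
import numpy as np

def perspect_transform(img, src, dst):
    # Warp the camera image to a top-down view.
    M = cv2.getPerspectiveTransform(src, dst)
    warped = cv2.warpPerspective(img, M, (img.shape[1], img.shape[0]))
    # Warp an all-ones image the same way to get the camera's field of view;
    # this mask keeps the obstacle map limited to what the rover can actually see.
    mask = cv2.warpPerspective(np.ones_like(img[:, :, 0]), M,
                               (img.shape[1], img.shape[0]))
    return warped, mask

def color_thresh(img, lower=(160, 160, 160), upper=(255, 255, 255)):
    # Select pixels whose RGB values fall inside [lower, upper].
    # Giving the threshold a range (rather than a single cutoff) lets the same
    # function pick out the golden rocks with a different lower/upper pair.
    select = ((img[:, :, 0] >= lower[0]) & (img[:, :, 0] <= upper[0]) &
              (img[:, :, 1] >= lower[1]) & (img[:, :, 1] <= upper[1]) &
              (img[:, :, 2] >= lower[2]) & (img[:, :, 2] <= upper[2]))
    return select.astype(np.float32)

# Usage idea: navigable terrain from the warped image, then obstacles as the
# inverse of navigable terrain, limited to the warped field of view:
#   terrain = color_thresh(warped)
#   obstacles = np.absolute(terrain - 1) * mask
```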
Rubric Points
Here I will consider the rubric points individually and describe how I addressed each point in my implementation.
1. Run the functions provided in the notebook on test images (first with the test data provided, next on data you have recorded). Add/modify functions to allow for color selection of obstacles and rock samples.
1. Populate the `process_image()` function with the appropriate analysis steps to map pixels identifying navigable terrain, obstacles and rock samples into a worldmap. Run `process_image()` on your test data using the `moviepy` functions provided to create video output of your result.
1. Fill in the `perception_step()` (at the bottom of the `perception.py` script) and `decision_step()` (in `decision.py`) functions in the autonomous mapping scripts and an explanation is provided in the writeup of how and why these functions were modified as they were.
- Defined source and destination points for perspective transform
- Applied the perspective transform and created a field-of-view mask used for the obstacle map
- Applied a color threshold that takes a range of values, and created the obstacle map from the masked perspective transform
- Updated the image on the left side of the screen to show the rover's current field of view
- Converted the map image to rover-centric coordinates
- Converted rover-centric values to world coordinates
- Updated the map image on the right side of the screen
- Converted rover-centric pixel positions to polar coordinates and updated the angle and distance arrays (a sketch of these coordinate conversions follows this list)
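The coordinate conversions in the list above follow the standard lesson pattern. The sketch below shows the idea; the helper names and the `world_size`/`scale` defaults are assumptions, not necessarily the exact values in my `perception.py`.

```python
import numpy as np

def rover_coords(binary_img):
    # Convert thresholded image pixels to coordinates in the rover's frame,
    # with the rover at the origin and x pointing forward.
    ypos, xpos = binary_img.nonzero()
    x_pixel = -(ypos - binary_img.shape[0]).astype(np.float32)
    y_pixel = -(xpos - binary_img.shape[1] / 2).astype(np.float32)
    return x_pixel, y_pixel

def pix_to_world(xpix, ypix, xpos, ypos, yaw, world_size=200, scale=10):
    # Rotate rover-centric pixels by the rover's yaw, scale them down,
    # translate by the rover's world position, and clip to the map bounds.
    yaw_rad = yaw * np.pi / 180
    x_rot = xpix * np.cos(yaw_rad) - ypix * np.sin(yaw_rad)
    y_rot = xpix * np.sin(yaw_rad) + ypix * np.cos(yaw_rad)
    x_world = np.clip(np.int_(x_rot / scale + xpos), 0, world_size - 1)
    y_world = np.clip(np.int_(y_rot / scale + ypos), 0, world_size - 1)
    return x_world, y_world

def to_polar_coords(x_pixel, y_pixel):
    # Distances and angles of navigable pixels, used to steer toward open terrain.
    dist = np.sqrt(x_pixel**2 + y_pixel**2)
    angles = np.arctan2(y_pixel, x_pixel)
    return dist, angles
```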
Screen Resolution: 1024x768
Graphics Quality: Good
FPS: 15
The rover is a basic left-wall follower with a steering bias of 13 and a side-to-side steering range of -10 to +10. I gave the color threshold function a range of values (lower and upper bounds) so the same function could also be used to detect rocks, and used a mask to build the obstacle map.
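As a sketch of how the wall-follow bias plays out in the steering command (the function name here is hypothetical, and I'm assuming the bias is simply added to the mean navigable-terrain angle before clipping, which matches the 13 / -10 to +10 numbers above):

```python
import numpy as np

def wall_follow_steer(nav_angles, bias=13.0, steer_limit=10.0):
    # nav_angles: navigable-terrain angles in radians from the perception step.
    # Steer toward the mean navigable direction, biased toward the left wall,
    # then clip to the rover's side-to-side range of motion.
    if nav_angles is None or len(nav_angles) == 0:
        return 0.0
    mean_angle_deg = np.mean(nav_angles) * 180.0 / np.pi
    return float(np.clip(mean_angle_deg + bias, -steer_limit, steer_limit))
```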
The rover still sometimes gets stuck in loops and on obstacles. It picks up rocks if they are right in front of it. I would like to figure out a way to exclude areas it has already travelled. It stops following the left wall if a curve is too large or if the rover is pointing the wrong way when it comes around some turns.