By: David Tian
Date: 2019-03-25
- Raspberry Pi 3 Model B+ (1.4GHz, 64-bit quad-core, with WiFi and Bluetooth), $30 Link
- Pi case and power supply, $20 (included in the above package)
- Raspberry Pi Camera Module V2 (8 MP CSI camera), $25 Link
- 64GB MicroSD card. Better to have 64GB or larger, as we will be dealing with a lot of videos and large deep learning model files.
- Download NOOBS and unzip onto a formatted micro SD card.
- Insert SD card into Pi
- Connect HDMI/mouse/keyboard/2.5A power adapter to the Pi board. Make sure to use the 2.5A power adapter; otherwise it may not boot up, and will only show a rainbow-colored splash screen with an error stating insufficient power.
- Pi should boot up.
- Select the Raspbian OS Full Version to install at the boot-up prompt. It will take a few minutes to install and use up about 4GB of space. After installation, you should see a full GUI desktop. <TODO: image here>
- On first boot:
- The OS will ask you to change the password for user "pi". Change it to "rasp".
- The OS needs to upgrade to the latest software. This may take 10-20 minutes.
Setting up remote access allows the Pi to run headless, saving us from needing a dedicated monitor and keyboard/mouse. This video gives a very good tutorial on how to set up SSH and Remote Desktop.
- Get the IP address of the Pi
- Run `ifconfig`. Find the IP address to be 192.168.1.120.
<TODO: image here>
- Setup SSH and VNC
- Enable SSH Server:
- Run `sudo raspi-config` in a terminal to start the "Raspberry Pi Software Configuration Tool". <TODO: image here>
- For Raspi-Config Rev 1.3, choose "5. Interface Options" -> "SSH" -> "Enable"
- Connect from Windows via Putty to the IP address (192.168.1.120) from the first step. You will need to type in the username/password (pi/rasp).
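As an aside, the same SSH connection can be scripted from Python with the paramiko package (my own sketch, not part of the original tutorial; assumes `pip3 install paramiko` on the client machine):
```
# Hypothetical example: run a command on the Pi over SSH using paramiko.
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # trust the Pi's host key on first connect
client.connect('192.168.1.120', username='pi', password='rasp')

stdin, stdout, stderr = client.exec_command('hostname && uptime')
print(stdout.read().decode())
client.close()
```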
- Setup VNC
- For Raspi-Config Rev 1.3, choose "5. Interface Options" -> "VNC" -> "Enable"
- Download RealVNC Viewer from here
- Connect to the Pi's IP address using RealVNC Viewer. You will see the same desktop as the one the Pi is running.
- Setup Remote Desktop
- SSH/VNC needs to be enabled per the last two steps
- Run `sudo apt-get install xrdp` to install the Remote Desktop server on the Pi
- Run Remote Desktop from Windows to connect to the Pi's IP address. You will see a different instance of the Pi's desktop. (So it is probably better to use VNC if you want to remote-control the Pi from your computer.)
- Setup Remote File Access, so we can edit files on the Pi from our own computer. https://www.juanmtech.com/samba-file-sharing-raspberry-pi/
```
sudo apt-get update && sudo apt-get upgrade -y
sudo apt-get install samba samba-common-bin
sudo nano /etc/samba/smb.conf
```
Delete all lines in the file and paste in the following:
```
[global]
netbios name = Pi
server string = The PiCar File System
workgroup = WORKGROUP

[HOMEPI]
path = /home/pi
comment = No comment
browsable = yes
writable = Yes
create mask = 0777
directory mask = 0777
public = no
```
Create a samba password for user pi (go ahead and use the same password: rasp):
```
sudo smbpasswd -a pi
New SMB password:
Retype new SMB password:
Added user pi.
```
Restart the samba server:
```
sudo service smbd restart
```
Connect from Windows: wait 30-60 seconds and refresh Network, and you should see `\\Raspberrypi\homepi`.
Troubleshooting: run `sudo service --status-all | grep samba` and check that both services show `[ + ]`:
```
[ + ]  samba
[ - ]  samba-ad-dc
```
- Remote Debugging from PyCharm
- Plug in the USB Camera
- Install USB Video Viewer: `sudo apt-get install cheese`
- Launch the video player. You should see live video feeds from the USB camera: `cheese &`
- Power the Pi down by typing this command in a terminal: `sudo shutdown -h now` (`-h` for shutdown, `-r` for reboot)
- Connect the Pi Camera and enable settings following this video.
- Run `sudo raspi-config`
- For Raspi-Config Rev 1.3, choose "5. Interface Options" -> "P1 Camera"
- Capture Still Image via command line
- Run this command: `raspistill -o ~/Desktop/mystill.jpg`. This will bring up the preview video screen, then take a photo and save mystill.jpg on your desktop.
- Double-click to open the image. <TODO: image here>
- The `--vflip` option may be useful to flip the image from the camera upside down.
- Capture Video via command line
- Capture Still Image via Python by following this video
- Install the picamera python module (use python3-picamera instead if you will run the code with python3):
```
sudo apt-get install python-picamera
```
- Initialize picamera:
```
import picamera
import time

camera = picamera.PiCamera()
```
- Capture a still image:
```
camera.capture('mystill_py.jpg')
```
- Capture a 5 sec video:
```
camera.start_recording('myvideo_py.h264')
time.sleep(5)  # this is in seconds
camera.stop_recording()
```
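Putting the snippets above together, a minimal end-to-end sketch (my own combination of the steps above; run it with the interpreter matching the picamera package you installed):
```
# Combines the picamera snippets above: take one still, then a 5-second video.
import time
import picamera

camera = picamera.PiCamera()
try:
    camera.start_preview()   # optional live preview on the Pi's display
    time.sleep(2)            # give the sensor time to adjust exposure
    camera.capture('mystill_py.jpg')

    camera.start_recording('myvideo_py.h264')
    time.sleep(5)            # record for 5 seconds
    camera.stop_recording()
finally:
    camera.close()           # release the camera for other programs
```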
TensorFlow is Google's deep learning framework. OpenCV is an open-source computer vision package. We will use these two to do object detection (stop signs/traffic lights) and lane detection from video feeds. Edje Electronics' video and his github page give a good overview of how to install TensorFlow and how to use it to do object detection from video.
- Install TensorFlow: (DO NOT use the installation from the video above.) As of March 2019, the most recent version of TensorFlow is 1.13 (2.0 is still alpha).
```
sudo apt-get install libatlas-base-dev
pip3 install tensorflow
```
- Test the TensorFlow install by running the "Hello World" program in python3. (Note you MUST run `python3` and NOT `python`, which is Python 2.)
```
import tensorflow as tf
hello = tf.constant('Hello TensorFlow!')
sess = tf.Session()
print(sess.run(hello))
```
If you see the following output, then TensorFlow is installed properly.
```
Hello TensorFlow!
```
I got the warning messages below. These RuntimeWarnings are commonly seen with the Pi TensorFlow build and appear to be harmless binary-compatibility warnings, so they can be ignored.
```
/usr/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: compiletime version 3.4 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.5
  return f(*args, **kwds)
/usr/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: builtins.type size changed, may indicate binary incompatibility. Expected 432, got 412
```
Other TF example code can be found here:
```
git clone https://github.com/tensorflow/tensorflow.git
```
- Install OpenCV and its dependent packages:
```
sudo apt-get install libjpeg-dev libtiff5-dev libjasper-dev libpng12-dev
sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev
sudo apt-get install libxvidcore-dev libx264-dev
sudo apt-get install qt4-dev-tools
pip3 install opencv-python
```
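To verify the OpenCV install, a quick sanity check (my own sketch; assumes a camera is available at index 0, e.g. the USB camera):
```
# Print the OpenCV version and try to grab one frame from camera 0.
import cv2

print(cv2.__version__)

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print('Captured frame of shape:', frame.shape)
    cv2.imwrite('opencv_test.jpg', frame)
else:
    print('Could not read from camera 0')
cap.release()
```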
- Install Protobuf, which is needed by TensorFlow's object detection API. It has to be built from source, as there are no pre-built binaries on github for the Pi OS/CPU combination. The following commands are from Edje Electronics' github page.
```
# First, get the packages needed to compile Protobuf from source
sudo apt-get install autoconf automake libtool curl
# Then download the protobuf release from its GitHub repository. At the time of writing, v3.7.0 is the latest version.
PROTOC_VER=3.7.0
PROTOC_ZIP=protobuf-all-$PROTOC_VER.tar.gz
curl -OL https://github.com/google/protobuf/releases/download/v$PROTOC_VER/$PROTOC_ZIP
tar -zxvf $PROTOC_ZIP
cd protobuf-$PROTOC_VER
# Build the package (this takes about 60 min)
make
# Check the build (this takes about 2 hours, and may freeze up the Pi system. Just reboot the Pi, and rerun `make check` to continue.)
make check
# Install to the proper system directories
sudo make install
# Set up protobuf for python (this takes about ...)
cd python
export LD_LIBRARY_PATH=../src/.libs
python3 setup.py build --cpp_implementation
python3 setup.py test --cpp_implementation
sudo python3 setup.py install --cpp_implementation
export PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=cpp
export PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION_VERSION=3
sudo ldconfig
# protoc, the protobuf compiler, is now compiled and installed. Try to run it:
protoc
```
You should see protoc's help page, as below.
```
pi@raspberrypi:~/Downloads/protobuf-3.7.0/python $ protoc
Usage: protoc [OPTION] PROTO_FILES
Parse PROTO_FILES and generate output based on the options given:
  -IPATH, --proto_path=PATH   Specify the directory in which to search for
                              imports.  May be specified multiple times;
                              directories will be searched in order.  If not
                              given, the current working directory is used.
                              If not found in any of the these directories,
                              the --descriptor_set_in descriptors will be
                              checked for required proto file.
  --version                   Show version info and exit.
  -h, --help                  Show this text and exit.
[omitted....]
```
Reboot the Pi (it is needed for TensorFlow to fully work):
```
sudo reboot now
```
[NOTE] For an x86 Linux system, we can just download and unzip the pre-built binary directly, as in this tutorial.
```
# Download binary file (not source)
PROTOC_VER=3.7.0
PROTOC_ZIP=protoc-$PROTOC_VER-linux-x86_64.zip
curl -OL https://github.com/google/protobuf/releases/download/v$PROTOC_VER/$PROTOC_ZIP
unzip $PROTOC_ZIP -d protoc3
# Move protoc to /usr/local/bin/
sudo mv protoc3/bin/* /usr/local/bin/
# Move protoc3/include to /usr/local/include/
sudo mv protoc3/include/* /usr/local/include/
```
- Setup TensorFlow Models. Sources are from the TensorFlow github.
Note that models under `models/official` are officially maintained by Google, while models under `models/research` are NOT officially maintained by Google, but by individual researchers.
```
# create a tensorflow model directory
pi@raspberrypi:~ $ mkdir tensorflow1
pi@raspberrypi:~ $ cd tensorflow1/
# download the models from github (this takes about 30 minutes to download 1.5GB of models)
pi@raspberrypi:~/tensorflow1 $ git clone --recurse-submodules https://github.com/tensorflow/models.git
Cloning into 'models'...
remote: Enumerating objects: 6, done.
remote: Counting objects: 100% (6/6), done.
remote: Compressing objects: 100% (6/6), done.
Receiving objects:  19% (4920/24902), 101.71 MiB | 854.00 KiB/s
[omitted....]

sudo nano ~/.bashrc
# add this line to the end of ~/.bashrc so that the PYTHONPATH environment variable includes the TF model folders.
export PYTHONPATH=$PYTHONPATH:/home/pi/tensorflow1/models/research:/home/pi/tensorflow1/models/research/slim

# close and reopen the Terminal to see the PYTHONPATH updated.
pi@raspberrypi:~ $ echo $PYTHONPATH
:/home/pi/tensorflow1/models/research:/home/pi/tensorflow1/models/research/slim
```
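To confirm the PYTHONPATH change took effect, a quick import check in python3 (my own sketch; submodules that depend on compiled protos will only import after the protoc compile step below):
```
# If PYTHONPATH is set correctly, the models/research package is importable.
import object_detection
print('models/research is on PYTHONPATH')
```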
- Download a model from the TensorFlow pre-trained Object Detection Model Zoo. Because of the Pi's limited computing power, we need to choose a model that is fast to run, but it may not be the most accurate.
In the COCO-trained models table, a lower number in the Speed (ms) column means a faster model, and a higher number in the COCO mAP column means a more accurate one. Accounting for both speed and accuracy, these are decent models to use for the Pi:
- ssd_mobilenet_v2_coco seems the best compromise for the Pi (time=27ms, accuracy=22 mAP).
- ssd_mobilenet_v1_fpn_coco (time=56ms, accuracy=32 mAP) would be another good candidate if we want more accuracy, but it runs twice as slow as ssd_mobilenet_v2_coco.
```
# We will download the ssdlite_mobilenet_v2_coco model
pi@raspberrypi:~ $ cd /home/pi/tensorflow1/models/research/object_detection/
pi@raspberrypi:~/tensorflow1/models/research/object_detection $ wget http://download.tensorflow.org/models/object_detection/ssdlite_mobilenet_v2_coco_2018_05_09.tar.gz
--2019-03-26 13:29:05--  http://download.tensorflow.org/models/object_detection/ssdlite_mobilenet_v2_coco_2018_05_09.tar.gz
[omitted...]
2019-03-26 13:30:27 (607 KB/s) - 'ssdlite_mobilenet_v2_coco_2018_05_09.tar.gz' saved [51025348/51025348]

# unzip the tar gz file
pi@raspberrypi:~/tensorflow1/models/research/object_detection $ tar -xzvf ssdlite_mobilenet_v2_coco_2018_05_09.tar.gz
ssdlite_mobilenet_v2_coco_2018_05_09/checkpoint
ssdlite_mobilenet_v2_coco_2018_05_09/model.ckpt.data-00000-of-00001
ssdlite_mobilenet_v2_coco_2018_05_09/model.ckpt.meta
ssdlite_mobilenet_v2_coco_2018_05_09/model.ckpt.index
ssdlite_mobilenet_v2_coco_2018_05_09/saved_model/saved_model.pb
ssdlite_mobilenet_v2_coco_2018_05_09/pipeline.config
ssdlite_mobilenet_v2_coco_2018_05_09/frozen_inference_graph.pb
ssdlite_mobilenet_v2_coco_2018_05_09/
ssdlite_mobilenet_v2_coco_2018_05_09/saved_model/variables/
ssdlite_mobilenet_v2_coco_2018_05_09/saved_model/

# see the proto files
pi@raspberrypi:~/tensorflow1/models/research/object_detection $ ls protos/ssd*
protos/ssd_anchor_generator.proto  protos/ssd.proto
# compile the proto files into python wrappers. This needs to be done from the `research` folder.
pi@raspberrypi:~/tensorflow1/models/research $ protoc object_detection/protos/*.proto --python_out=.
# after compiling, we should see the python wrappers created from the proto files.
pi@raspberrypi:~/tensorflow1/models/research $ ls object_detection/protos/ssd*
object_detection/protos/ssd_anchor_generator_pb2.py  object_detection/protos/ssd_anchor_generator.proto
object_detection/protos/ssd_pb2.py  object_detection/protos/ssd.proto
```
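To sanity-check the downloaded model outside of Edje's script, a minimal TF 1.x inference sketch (my own; assumes it is run from the object_detection folder, and test.jpg is any image you supply — the tensor names are the standard ones for TF Object Detection API frozen graphs):
```
# Load the frozen graph and run one detection pass over a single image.
import numpy as np
import tensorflow as tf
import cv2

MODEL = 'ssdlite_mobilenet_v2_coco_2018_05_09/frozen_inference_graph.pb'

graph = tf.Graph()
with graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(MODEL, 'rb') as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

# The model expects a uint8 RGB batch of shape [1, H, W, 3].
image = cv2.cvtColor(cv2.imread('test.jpg'), cv2.COLOR_BGR2RGB)

with tf.Session(graph=graph) as sess:
    boxes, scores, classes, num = sess.run(
        ['detection_boxes:0', 'detection_scores:0',
         'detection_classes:0', 'num_detections:0'],
        feed_dict={'image_tensor:0': np.expand_dims(image, axis=0)})

# Print every detection above a 0.5 confidence threshold (COCO class ids).
for box, score, cls in zip(boxes[0], scores[0], classes[0]):
    if score > 0.5:
        print('class %d, score %.2f, box %s' % (int(cls), score, box))
```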
Now all the setup should be done!
We will continue to follow Edje Electronics' video at 14:39 and his github page on TensorFlow/OpenCV Object Detection on Raspberry Pi.
- Download his code, `Object_detection_picamera.py`:
```
# Download the object detection code from Edje's github page
pi@raspberrypi:~/tensorflow1/models/research/object_detection $ wget https://raw.githubusercontent.com/EdjeElectronics/TensorFlow-Object-Detection-on-the-Raspberry-Pi/master/Object_detection_picamera.py

# install other dependent packages needed by the TF object detection API
sudo apt-get install libfreetype6-dev
sudo pip3 install pillow jupyter matplotlib cython opencv-python
sudo pip3 install lxml
# there are some errors with the lxml install, but they don't seem to matter.

# Run the object detection python script.
# This will take 1-2 min to load.
pi@raspberrypi:~/tensorflow1/models/research/object_detection $ python3 Object_detection_picamera.py
```
The Edge TPU (Tensor Processing Unit) is Google's specialized hardware, optimized to run deep learning inference.
- Plug the TPU into the USB-C cable, and then plug the USB end into the Pi board
- Power on, and then ssh/vnc into the Pi
- Run the following installation steps:
```
wget http://storage.googleapis.com/cloud-iot-edge-pretrained-models/edgetpu_api.tar.gz
tar xzf edgetpu_api.tar.gz
cd python-tflite-source
bash ./install.sh
# output of install.sh
[omitted...]
Using /home/pi/.local/lib/python3.5/site-packages
Finished processing dependencies for edgetpu==1.2.0
```
- Reboot the Pi to complete the installation:
```
sudo reboot
```
- Test the installation by trying to classify a parrot image:
```
pi@raspberrypi:~/python-tflite-source/edgetpu $ python3 demo/classify_image.py \
> --model test_data/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite \
> --label test_data/inat_bird_labels.txt \
> --image test_data/parrot.jpg
```
If you see the following output, then your TPU is connected and working!!
```
W0329 23:17:14.486328     814 package_registry.cc:65] Minimum runtime version required by package (5) is lower than expected (10).
---------------------------
Ara macao (Scarlet Macaw)
Score :  0.61328125
---------------------------
Platycercus elegans (Crimson Rosella)
Score :  0.15234375
```
We will be following the classify_capture.py demo (the last demo) in the Edge TPU Demo guide.
```
wget -P test_data https://storage.googleapis.com/cloud-iot-edge-pretrained-models/canned_models/mobilenet_v2_1.0_224_quant_edgetpu.tflite
wget -P test_data/ http://storage.googleapis.com/cloud-iot-edge-pretrained-models/canned_models/imagenet_labels.txt
python3 demo/classify_capture_usbcam.py --model test_data/mobilenet_v2_1.0_224_quant_edgetpu.tflite --label test_data/imagenet_labels.txt
python3 demo/object_detection_usbcam.py --model test_data/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite --label test_data/coco_labels.txt
```
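Beyond the bundled demo scripts, the edgetpu Python API can also be called directly. A minimal detection sketch (my own, based on the 2019-era edgetpu 1.x API that the demos use; paths are the demo's test_data files):
```
# Run one detection pass on the Edge TPU using the edgetpu Python API.
from PIL import Image
from edgetpu.detection.engine import DetectionEngine

engine = DetectionEngine(
    'test_data/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite')

img = Image.open('test_data/parrot.jpg')
results = engine.DetectWithImage(img, threshold=0.4, top_k=5,
                                 relative_coord=False)

for obj in results:
    # label_id indexes into coco_labels.txt; bounding_box is [[x1, y1], [x2, y2]].
    print(obj.label_id, obj.score, obj.bounding_box)
```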
- Edje Electronics’s video and his github page on TensorFlow/OpenCV Object Detection on Raspberry Pi
- OpenCV with Raspberry Pi Tutorial
- Hardware List
- RPi Remote Access
- Software packages installation (Python/OpenCV/numpy/etc)
- TF for CPU
- Object Detection with CPU
- TF for Edge TPU
- Object Detection with TPU
- Object Detection with a single Camera Interface (Pi/USB) and a single Object Detection Interface (CPU/TPU)
- Lane Detection
- Stop Sign/Green Light/Red Light Detection (Transfer Learning) https://www.youtube.com/watch?v=Rgpfk6eYxJA
- Distance Sensing
- Steering within Lane
- MobileNet-SSD-V1: fast to run, less accurate
- Faster-RCNN-Inception-V2: slow to run, more accurate
- FFmpeg: split out images from videos
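The FFmpeg note above is about splitting a recorded video into still frames for labeling; an OpenCV alternative sketch (my own; input.mp4 is a hypothetical file name):
```
# Save every 10th frame of a video as a jpg, for labeling.
import cv2

cap = cv2.VideoCapture('input.mp4')
i = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if i % 10 == 0:
        cv2.imwrite('frame_%05d.jpg' % i, frame)
    i += 1
cap.release()
```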
Transfer Learning by Chengwei Zhang, 2/11/2019 (HELPFUL):
- https://medium.com/swlh/how-to-train-an-object-detection-model-easy-for-free-f388ff3663e
- https://colab.research.google.com/github/Tony607/object_detection_demo/blob/master/tensorflow_object_detection_training_colab.ipynb
- https://github.com/Tony607/object_detection_demo
Colab helpful scripts (send email, and show RAM and GPU memory): https://colab.research.google.com/drive/1P2AmVHPmDccstO0BiGu2uGAG0vTx444b#scrollTo=VgRfWu26wIBt
Convert a model file (protocol buffer format) to tflite (flat buffer) format:
- https://www.tensorflow.org/lite/guide/get_started#2_convert_the_model_format
- https://www.tensorflow.org/lite/convert/cmdline_examples
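In TF 1.13 the same conversion can also be done from Python instead of the command line. A sketch (my own; the input/output array names below are placeholders — look up your model's actual tensor names first):
```
# Convert a frozen graph (.pb, protocol buffer) to a .tflite flat buffer.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file='frozen_inference_graph.pb',
    input_arrays=['input_tensor_name'],    # placeholder: your model's input
    output_arrays=['output_tensor_name'])  # placeholder: your model's output
tflite_model = converter.convert()

with open('model.tflite', 'wb') as f:
    f.write(tflite_model)
```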
EdgeTPU (from .tflite to EdgeTPU Model _edgetpu.tflite) https://coral.withgoogle.com/web-compiler/
Error seen when an Edge TPU USB transfer fails:
```
F0404 10:45:46.228721 20807 usb_driver.cc:834] transfer on tag 1 failed. Abort. generic::unknown: USB transfer error 1 [LibUsbDataOutCallback]
Backend terminated (returncode: -6)
Fatal Python error: Aborted

Thread 0x76faa640 (most recent call first):
  File "/home/pi/python-tflite-source/edgetpu/swig/edgetpu_cpp_wrapper.py", line 110 in RunInference
  File "/home/pi/python-tflite-source/edgetpu/detection/engine.py", line 123 in DetectWithInputTensor
  File "/home/pi/python-tflite-source/edgetpu/detection/engine.py", line 93 in DetectWithImage
  File "/home/pi/python-tflite-source/edgetpu/demo/object_detection_usb.py", line 76 in main
```
```
mkdir D:\David\SelfDrivingCar\EdgeTPU\Docker
cd D:\David\SelfDrivingCar\EdgeTPU\Docker
```
Download http://storage.googleapis.com/cloud-iot-edge-pretrained-models/docker/obj_det_docker to D:\David\SelfDrivingCar\EdgeTPU\Docker
```
D:\David\SelfDrivingCar\EdgeTPU\Docker>docker build - < obj_det_docker --tag detect-tutorial
Sending build context to Docker daemon  4.608kB
Step 1/13 : FROM tensorflow/tensorflow:1.12.0-rc2-devel
1.12.0-rc2-devel: Pulling from tensorflow/tensorflow
18d680d61657: Pull complete
0addb6fece63: Pull complete
78e58219b215: Pull complete
eb6959a66df2: Pull complete
b612f6150252: Downloading [======================>  ]  122.2MB/265.8MB
3a3431d93e83: Download complete
def5c38b0d33: Download complete
5838a959ea1d: Download complete
0d228310757c: Download complete
3e8ad7af9b28: Download complete
07710696a7aa: Download complete
8eda15e6480e: Downloading [=====================>  ]  79.42MB/183.6MB
1204ced585ff: Download complete
31c3d3c34dab: Downloading [=================>  ]  19.86MB/57.49MB
94f9c114a883: Waiting
```
```
docker run --name detect-tutorial --rm -it --privileged -p 6006:6006 --mount type=bind,src=/d/David/SelfDrivingCar/EdgeTPU/Docker,dst=/tensorflow/models/research/learn_pet detect-tutorial
```
```
/usr/local/lib/python3.5/dist-packages/SunFounder_PiCar-1.0.1-py3.5.egg $ cp -r picar ~/picar3.5
/usr/local/lib/python3.5/dist-packages/SunFounder_PiCar-1.0.1-py3.5.egg $ sudo 2to3 -w picar
RefactoringTool: Files that were modified:
RefactoringTool: picar/PCF8591.py
RefactoringTool: picar/__init__.py
RefactoringTool: picar/back_wheels.py
RefactoringTool: picar/filedb.py
RefactoringTool: picar/front_wheels.py
RefactoringTool: picar/SunFounder_PCA9685/PCA9685.py
RefactoringTool: picar/SunFounder_PCA9685/Servo.py
RefactoringTool: picar/SunFounder_TB6612/TB6612.py
```
```
pi@raspberrypi:~ $ cd ~/SunFounder_PiCar-V/ball_track/
pi@raspberrypi:~/SunFounder_PiCar-V/ball_track $ cp ball_tracker.py ball_tracker3.py
```
- Remove the line `import cv2.cv as cv`
- Change `cv.CV_HOUGH_GRADIENT` to `cv2.HOUGH_GRADIENT`
After you are done, the diff should be the following:
```
pi@raspberrypi:~/SunFounder_PiCar-V/ball_track $ diff ball_tracker.py ball_tracker3.py
6d5
< import cv2.cv as cv
```
The circle-detection call becomes:
```
circles = cv2.HoughCircles(red_hue_image, cv2.HOUGH_GRADIENT, 1, 120, 100, 20, 10, 0);
```
```
pi@raspberrypi:~/SunFounder_PiCar-V/ball_track $ python3 ball_tracker3.py
DEBUG "back_wheels.py": Set debug off
DEBUG "TB6612.py": Set debug off
DEBUG "TB6612.py": Set debug off
DEBUG "PCA9685.py": Set debug off
DEBUG "front_wheels.py": Set debug off
DEBUG "front_wheels.py": Set wheel debug off
DEBUG "Servo.py": Set debug off
Traceback (most recent call last):
  File "ball_tracker3.py", line 68, in <module>
    bw.speed = 0
  File "/usr/local/lib/python3.5/dist-packages/SunFounder_PiCar-1.0.1-py3.5.egg/picar/back_wheels.py", line 91, in speed
    self.left_wheel.speed = self._speed
  File "/usr/local/lib/python3.5/dist-packages/SunFounder_PiCar-1.0.1-py3.5.egg/picar/SunFounder_TB6612/TB6612.py", line 62, in speed
    self._pwm(self._speed)
  File "/usr/local/lib/python3.5/dist-packages/SunFounder_PiCar-1.0.1-py3.5.egg/picar/back_wheels.py", line 46, in _set_a_pwm
    self.pwm.write(self.PWM_A, 0, pulse_wide)
  File "/usr/local/lib/python3.5/dist-packages/SunFounder_PiCar-1.0.1-py3.5.egg/picar/SunFounder_PCA9685/PCA9685.py", line 229, in write
    self._write_byte_data(self._LED0_OFF_L+4*channel, off & 0xFF)
TypeError: unsupported operand type(s) for &: 'float' and 'int'
```
The Python 2 version of the package lives in:
```
/usr/local/lib/python2.7/dist-packages/SunFounder_PiCar-1.0.1-py2.7.egg/picar
```
Quick motor test:
```
import picar
picar.back_wheels.test()
picar.front_wheels.test()
```
```
pi@raspberrypi:~/py3/SunFounder_PiCar-V/remote_control $ sudo ./start
Server running
Traceback (most recent call last):
  File "manage.py", line 8, in <module>
    from django.core.management import execute_from_command_line
ImportError: No module named 'django'
```
Fix by installing Django:
```
pip3 install Django
```
Issue:
```
def map(self, x, in_min, in_max, out_min, out_max):
    '''To map the value from one range to another'''
    return (x - in_min) * (out_max - out_min) / (in_max - in_min) + out_min
```
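This map() looks like the cause of the TypeError in the ball_tracker traceback above: in Python 3, / always returns a float, and PCA9685.write() then attempts float & 0xFF. A likely fix (my own, untested) is to truncate the result to an int:
```
# Likely fix: cast to int so PCA9685.write() receives an int and
# "off & 0xFF" no longer fails with float & int.
def map(self, x, in_min, in_max, out_min, out_max):
    '''To map the value from one range to another'''
    return int((x - in_min) * (out_max - out_min) / (in_max - in_min) + out_min)
```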
Edje Electronics: Label data with the LabelImg Tool
- Download LabelImg from https://github.com/tzutalin/labelImg. Windows binaries are prebuilt; Mac and Linux versions can be built from source.
- Click Open Dir to point to training image folders
- Turn on Auto Save xml: View Menu -> Auto Saving
- Useful shortcuts: W to create rectangular box, D to go to next file, A to go to previous file
- You will see an xml file along with each picture file. The xml files specify the bounding box and type of each object you labeled.
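LabelImg saves the annotations in Pascal VOC XML format. A minimal sketch to read the boxes back out (my own; the file name is hypothetical):
```
# Read object names and bounding boxes from a LabelImg (Pascal VOC) xml file.
import xml.etree.ElementTree as ET

root = ET.parse('image_0001.xml').getroot()
for obj in root.findall('object'):
    name = obj.find('name').text
    box = obj.find('bndbox')
    xmin, ymin = int(box.find('xmin').text), int(box.find('ymin').text)
    xmax, ymax = int(box.find('xmax').text), int(box.find('ymax').text)
    print(name, (xmin, ymin, xmax, ymax))
```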