Software to operate the go-kart in autonomous and manual modes. The performance of the go-kart hardware and software is documented in reports.
The code in the repository operates a heavy and fast robot that may endanger living creatures. We follow best practices and coding standards to protect against avoidable errors; see the development guidelines.
The student projects are supervised by Andrea Censi, Jacopo Tani, Alessandro Zanardi, and Jan Hakenberg. The go-kart has been operated at Innovation Park Dübendorf since December 2017.
- Noah Isaak, Richard von Moos (BT): MicroAutoBox programming, low-level actuator logic
- Mario Gini (MT): simultaneous localization and mapping for event-based vision systems inspired by Weikersdorfer/Hoffmann/Conradt; reliable waypoint extraction and following
- Yannik Nager (MT): Bayesian occupancy grid; trajectory planning
- Valentina Cavinato (SP): tracking of moving obstacles
- Marc Heim (MT): calibration of steering, motors, braking; torque vectoring; track reconnaissance; model predictive contouring control; synthesis of engine sound; drone video
- Michael von Büren (MT): simulation of go-kart dynamics, neural network as model for MPC
- Joel Gächter (MT): sight-lines mapping, clothoid pursuit, planning with clothoids
- Antonia Mosberger (BT): power steering, anti-lock braking, lane keeping
- Maximilien Picquet (SP): Pacejka parameter estimation using an unscented Kalman filter
- Thomas Andrews (SP): torque-controlled steering extension to MPC
The software builds on the following libraries:
- `tensor` for linear algebra with physical units
- `owl` for motion planning
- `lcm` Lightweight Communications and Marshalling, for message interchange, logging, and playback. All messages are encoded using the single type `BinaryBlob`. The byte order of the binary data is little-endian, since this encoding is native on most architectures (see the sketch below).
- `io.humble` for video generation
- `jSerialComm` for platform-independent serial port access
- `ELKI` for DBSCAN
- `lwjgl` for joystick readout
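Since every `BinaryBlob` payload is a raw byte array, producers and consumers must agree on the little-endian layout. Below is a minimal sketch of such packing with `java.nio`; the field layout shown is purely illustrative, not the repository's actual message layout.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

/* Sketch: packing sensor values into a little-endian byte array, as used
 * for the payload of a BinaryBlob message; the fields below are examples. */
public class BinaryBlobDemo {
  public static void main(String[] args) {
    ByteBuffer byteBuffer = ByteBuffer.allocate(12);
    byteBuffer.order(ByteOrder.LITTLE_ENDIAN); // native on x86/ARM
    byteBuffer.putFloat(0.25f); // hypothetical steering angle
    byteBuffer.putFloat(1.5f);  // hypothetical velocity
    byteBuffer.putInt(42);      // hypothetical sequence number
    byte[] payload = byteBuffer.array();
    // decoding reverses the operations, with the same byte order
    ByteBuffer decoder = ByteBuffer.wrap(payload).order(ByteOrder.LITTLE_ENDIAN);
    System.out.println(decoder.getFloat()); // 0.25
    System.out.println(decoder.getFloat()); // 1.5
    System.out.println(decoder.getInt());   // 42
  }
}
```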
Sensor interfaces
- interfaces to the lidars Velodyne VLP-16, HDL-32E, Quanergy Mark8, and HOKUYO URG-04LX-UG01
- interfaces to the inertial measurement unit Variense VMU931
- interfaces to the event-based camera Davis240C, with lossless compression by a factor of 4
- interfaces to the LabJack U3
- point cloud visualization and localization with lidar: see video
  - distance as 360[deg] panorama
  - intensity as 360[deg] panorama (mapping sketched after this list)
- 3D-point cloud visualization: see video
- for the HOKUYO URG-04LX-UG01, our code builds upon urg_library-1.2.0
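The panorama views map each lidar return to a pixel: the azimuth selects the column, the laser ring selects the row, and the distance or intensity value sets the gray level. Below is a minimal sketch under assumed conventions (720 columns, 16 rings as on the VLP-16, distances clipped at 100[m]); the repository's rendering code may differ.

```java
import java.awt.image.BufferedImage;

/* Sketch: render lidar returns as a 360[deg] distance panorama.
 * Assumptions: azimuth in [0, 360) degrees, ring index in [0, 16),
 * distance in meters, clipped at 100[m]. */
public class LidarPanorama {
  static final int WIDTH = 720; // 0.5[deg] per column
  static final int RINGS = 16;  // VLP-16 has 16 laser rings

  static BufferedImage render(double[][] returns) { // rows of {azimuth, ring, distance}
    BufferedImage image = new BufferedImage(WIDTH, RINGS, BufferedImage.TYPE_BYTE_GRAY);
    for (double[] ret : returns) {
      int col = ((int) (ret[0] / 360.0 * WIDTH)) % WIDTH;
      int row = (int) ret[1];
      int gray = (int) Math.min(255, ret[2] / 100.0 * 255); // near = dark, far = bright
      image.getRaster().setSample(col, row, 0, gray);
    }
    return image;
  }

  public static void main(String[] args) {
    BufferedImage image = render(new double[][] { { 90.0, 7, 12.5 } });
    System.out.println(image.getRaster().getSample(180, 7, 0)); // 31
  }
}
```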
Recordings compare the Davis240C output in rolling shutter mode and global shutter mode at 2.5[ms] and 5[ms], as well as events-only renderings at 1[ms], 2.5[ms], and 5[ms].
The software supports the event file formats AEDAT 2.0 and AEDAT 3.1:
- parsing and visualization
- conversion to the text+png format used by the Robotics and Perception Group at UZH
- lossless compression of DVS events by a factor of 2 (see the sketch after this list)
- compression of raw APS data by a factor of 8 (where the ADC values are reduced from 10 bit to 8 bit)
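For intuition: in AEDAT 2.0, each DVS event occupies 8 bytes (a 32-bit address word plus a 32-bit timestamp), so a factor of 2 corresponds to 4 bytes per event. The sketch below shows one way to achieve that by packing x, y, polarity, and a bounded timestamp delta into a single 32-bit word, together with the 10-bit to 8-bit reduction of APS samples. It is a hypothetical encoding, not necessarily the scheme used in this repository.

```java
/* Hypothetical 4-byte event encoding: x and y each fit in 8 bits for the
 * 240x180 Davis240C sensor, polarity takes 1 bit, leaving 15 bits for the
 * timestamp delta to the previous event (events with larger gaps would
 * need an escape mechanism, omitted here). */
public class DvsCompressionDemo {
  static int pack(int x, int y, int polarity, int dt) {
    return (dt << 17) | (polarity << 16) | (y << 8) | x;
  }

  static int[] unpack(int word) {
    return new int[] { //
        word & 0xFF,              // x
        (word >> 8) & 0xFF,       // y
        (word >> 16) & 0x1,       // polarity
        (word >>> 17) & 0x7FFF }; // timestamp delta
  }

  /** reduce a 10-bit ADC sample to 8 bit by dropping the two least significant bits */
  static int compressAps(int adc10) {
    return adc10 >> 2;
  }

  public static void main(String[] args) {
    int word = pack(120, 90, 1, 12345);
    for (int value : unpack(word))
      System.out.println(value); // 120, 90, 1, 12345
    System.out.println(compressAps(1023)); // 255
  }
}
```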
Quote from Luca/iniLabs:
- Two parameters that are intended to control framerate: `APS.Exposure` and `APS.FrameDelay`.
- `APS.RowSettle` is used to tell the ADC how many cycles to delay before reading a pixel value, and due to the ADC we're using, it takes at least three cycles for the value of the current pixel to be output by the ADC, so an absolute minimum value there is 3. Better 5-8, to allow the value to settle. Indeed changing this affects the framerate, as it directly changes how much time you spend reading a pixel, but anything lower than 3 gets you the wrong pixel, and usually under 5-6 gives you degraded image quality.
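To see why `APS.RowSettle` bounds the framerate, note that every pixel read costs that many ADC cycles. The back-of-the-envelope sketch below uses an assumed 30[MHz] ADC clock purely for illustration; the actual clock rate is not stated above.

```java
/* Back-of-the-envelope framerate bound implied by APS.RowSettle:
 * every pixel read costs RowSettle ADC cycles.
 * The 30[MHz] ADC clock is an assumption for illustration. */
public class RowSettleDemo {
  public static void main(String[] args) {
    int width = 240, height = 180; // Davis240C resolution
    double adcClockHz = 30e6;      // assumed ADC clock
    for (int rowSettle : new int[] { 3, 5, 8 }) {
      double frameSeconds = (double) width * height * rowSettle / adcClockHz;
      System.out.printf("RowSettle=%d -> readout %.2f[ms], at most %.0f[fps]%n",
          rowSettle, frameSeconds * 1e3, 1 / frameSeconds);
    }
  }
}
```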
We observed that in global shutter mode, the stream of events is suppressed during signal image capture, whereas in rolling shutter mode the events are distributed more evenly.
- Simultaneous localization and mapping for event-based vision systems by David Weikersdorfer, Raoul Hoffmann, and Joerg Conradt
- Asynchronous event-based multikernel algorithm for high-speed visual features tracking by Xavier Lagorce et al.