The Donkey Car: Part 2 — Build, Calibrate, and Generate Training Data

Dan McCreary
5 min read · Jan 8, 2019

--

Here is my assembled Donkey Car. The chassis I had didn’t fit the 3D-printed part I ordered, so I had to improvise with some plexiglass.

This is part 2 in a 3-part series on the Donkey Car. Here are part 1 and part 3. In part 1, I talked about how I got my new Donkey Car based on a Raspberry Pi up and running and got the camera connection working. I tested the camera with the RPi Cam Web Interface and drove around the first floor of our house to get a feel for the car and how it navigates.

Sample Image from the RPi Cam Web Interface

In this mode the Pi was just a portable camera behind a web server, transmitting a video image to its web page. The RC car was still completely controlled by the 2.4 GHz controller that came with it. To run the RPi Cam Web Interface software I just opened a terminal on the Pi and downloaded the code from the GitHub site. Then I ran the startup.sh script, which started the web server.

I was interested in how much delay there was between when the image was captured by the camera and when it appeared in front of me on the web page. The delay was negligible, which allowed me to drive the car just by watching the image on the web page. This meant that the input/output between the camera and the Pi was fast, as was the transfer of the image through the WiFi chip to the web browser. It basically proved there was enough horsepower in the Pi to do real-time remote video driving.

I then disconnected the connectors from the 2.4 GHz receiver that came with the RC car and moved the connections to the servo controller that I ordered from Amazon. Although this servo controller board is designed to control up to 16 servos, we only need two of its channels: one for speed and one for steering. I also had to connect four wires from the servo controller to the Pi’s 40-pin GPIO header. A photo of these connections is below:

Four wires are used to communicate between the Pi and the servo controller. The black wire is ground, the red is +5V, and the yellow and orange wires are SCL (clock) and SDA (data) for the I2C bus. They are clearly labeled on the servo controller, and you can see the pinouts on the Pi GPIO interface.
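If you want to sanity-check the wiring before going any further, a few lines of Python can wiggle a servo over I2C. This is only a sketch: it assumes the common Adafruit_PCA9685 library and its default I2C address of 0x40, and the channel number and pulse values are purely illustrative; your real numbers come out of calibration.

# Quick I2C sanity check for the servo controller (a sketch, not the Donkey Car code).
# Assumes the Adafruit_PCA9685 library (pip install adafruit-pca9685) and the
# board's default I2C address of 0x40.
import time
import Adafruit_PCA9685

pwm = Adafruit_PCA9685.PCA9685()  # default address 0x40
pwm.set_pwm_freq(60)              # typical servo refresh rate in Hz

STEERING_CHANNEL = 1              # channel number here is only for illustration
for pulse in (300, 400, 300):     # sweep the steering servo back and forth
    pwm.set_pwm(STEERING_CHANNEL, 0, pulse)
    time.sleep(1)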

The next step is the calibration of the steering and the throttle. To do this I had to SSH into the Pi and run the calibration procedure. This process is a bit tricky since many Electronic Speed Controllers (ESCs) are a bit different. It is documented on the Donkey Car site here. I still can’t seem to get the car to go into reverse since my “STOP” setting is not correct. The net result is a configuration file that encodes the throttle and steering parameters of your car.
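For reference, the calibration numbers end up in the car’s config.py. The variable names below follow the donkeycar configuration template, but the PWM values are only placeholders; yours come out of the calibration procedure for your particular ESC and steering servo.

# Calibration-related entries in config.py (the values shown are placeholders).
STEERING_CHANNEL = 1           # servo controller channel wired to the steering servo
STEERING_LEFT_PWM = 460        # pulse count for full left
STEERING_RIGHT_PWM = 290       # pulse count for full right

THROTTLE_CHANNEL = 0           # servo controller channel wired to the ESC
THROTTLE_FORWARD_PWM = 500     # pulse count for full forward throttle
THROTTLE_STOPPED_PWM = 370     # the "stop" value I had trouble finding
THROTTLE_REVERSE_PWM = 220     # pulse count for full reverse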

Once this was done I was ready to drive around a test track. Much to my wife’s chagrin, I moved the furniture in the basement onto one side of the room and put down some white electrical tape on the basement floor. We had a cool epoxy coating put on the floor, but the white tape still had good contrast.

Sample Training Track for my Donkey Car

You can also see lots of reflections of the lights on the floor. Our training process must learn to ignore the light reflections and only pay “attention” to the white tape on the floor. Attention is an important concept in deep learning.

I then unplugged the Pi from the wall adapter and powered it up using the new 6800 mAh power pack that I purchased on Amazon. I used some tape to secure the power pack under the platform. I should note that the GND and VCC wires from the ESC do provide power to the digital circuitry in the 2.4 GHz receiver used in the RC car. However, this current is not enough to power the Pi. As a test I hooked up a USB current meter to the car while the Pi was running. The results are in the photo below:

Image of a USB current meter showing the Pi drawing about 300 mA when the photo was taken. In practice it ranged from 300 to 500 mA, much more current than the ESC was designed to deliver.

Generating the Training Data

Once we have our car all assembled, we are ready to generate a training data set. I SSHed into the Pi and started up the drive program:

$ python manage.py drive

This is a Python program that starts a web server showing you what the camera sees, and it also gives you controls for capturing a training set. Once the drive program is running, you can go to any web browser and enter the IP address of your car with port 8887. Now for the hard part: I had to drive around the track 10 times to build a training set!

The problem is that although I could control the car with the keyboard keys, it was very difficult to steer. I also tried the web interface “pointer”, but that was also difficult to steer with. Finally I got out my phone and pulled up the Donkey Car’s web page in the phone browser. The web page detects the forward and sideways tilt of the phone and translates this into throttle and steering. Very clever! With about an hour of practice I could get around the course.

I then pressed “Start Recording” and, after about 10 laps, pressed “Stop Recording”. After this was done I could SSH into the Donkey Car and change directory into the “tub” folder. In that folder there were about 30K .jpg and .json files. Each of the JSON files had a reference to an image as well as a timestamp and the throttle and steering values as floating-point numbers. This is our training data.

Here is a sample of the JSON file:

{
  "user/angle": 0.18989955357142868,
  "user/throttle": 0.7175781250000001,
  "user/mode": "user",
  "cam/image_array": "1000_cam-image_array_.jpg",
  "timestamp": "2019-01-05 17:09:35.184483"
}

Here is the image that corresponds to that JSON file:

Sample 160×120 pixel image for training the Donkey Car
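If you want to spot-check a tub before training, a few lines of Python can pair each JSON record with its image. This is a sketch that assumes the record files follow the record_*.json naming pattern and sit in the same folder as the .jpg files, as in the sample above.

# Peek at a few records from the "tub" folder (assumes record_*.json naming).
import glob, json, os
from PIL import Image   # pip install pillow

TUB_DIR = "tub"         # path to the tub folder copied off the car

records = sorted(glob.glob(os.path.join(TUB_DIR, "record_*.json")))
print(len(records), "records found")

for path in records[:5]:            # just look at the first few
    with open(path) as f:
        rec = json.load(f)
    img = Image.open(os.path.join(TUB_DIR, rec["cam/image_array"]))
    print(rec["user/angle"], rec["user/throttle"], img.size)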

I then copied the images from the Donkey Car to my laptop for training. I will cover that in part 3.
