Tag Archives: self-driving car

Exploring Udacity’s 1st 40GB driving data set

I read about the release of Udacity's second data set yesterday and wanted to check it out.  For convenience, I downloaded the original, smaller data set.

Preface: ROS is only officially supported on Ubuntu & Debian and is experimental on OS X (Homebrew), Gentoo, and OpenEmbedded/Yocto.

Getting the data

Download the data yourself: Driving Data

The data set, linked from the page above, is served from Amazon S3 and was quite slow to download, so I let it run late last night and started exploring today.

The compressed download is dataset.bag.tar.gz


and after extraction it is a 42.3 GB file, dataset.bag.
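A standard tar invocation unpacks it:

    tar -xzf dataset.bag.tar.gz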

.bag is a file type associated with the Robot Operating System (ROS).

Data overview

To get an overview of the file, use the rosbag info <filename> command:
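For this data set, that is:

    rosbag info dataset.bag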


There are 28 data topics from on-board sensors, including 3 color cameras.  The camera topics are:

  • /center_camera/image_color
  • /left_camera/image_color
  • /right_camera/image_color

Each camera topic has 15212 messages.  Doing the math, 15212 messages / 760 seconds works out to roughly 20 frames per second.

Viewing the video streams

Converting a camera topic to a standalone video is a two-step process:

  1. export jpegs from the bag file
  2. convert the jpegs to video

Exporting the jpegs

To export an image topic to jpegs, the bag needs to be played back and the frames extracted.  This can be done with a launch script.  The default filename pattern (frame%04d.jpg) allows only four digits, so we need to add a line that changes it to a pattern allowing five:
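In the image_view package's extract_images node this is the filename_format parameter, so the added line is along these lines:

    <param name="filename_format" value="frame%05d.jpg"/>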

The entire script below launches the player and the extractor:
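The working scripts are linked further down; what follows is a minimal sketch, assuming dataset.bag is reachable from the launch directory and targeting the left camera topic:

    <launch>
      <!-- Play the bag back; -d 2 delays playback so the extractor can subscribe first -->
      <node pkg="rosbag" type="play" name="rosbag" args="-d 2 dataset.bag"/>
      <!-- extract_images saves each frame of the remapped image topic as a jpeg -->
      <node name="extract" pkg="image_view" type="extract_images" respawn="false" output="screen" cwd="ROS_HOME">
        <remap from="image" to="/left_camera/image_color"/>
        <param name="filename_format" value="frame%05d.jpg"/>
      </node>
    </launch>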

The number of resulting frames should match the number of topic messages reported by rosbag info.

If not, as was the case here, the default sec-per-frame time should be changed.  It seems counter-intuitive, but slowing the rate down, trying “0.11” and “0.2”, extracted even fewer frames.  I settled on “0.02” seconds per frame, which produced the correct number of frames.  Add the corresponding line to the launch script:
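In extract_images this is the sec_per_frame parameter (its default is 0.1):

    <param name="sec_per_frame" value="0.02"/>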

The working launch script now looks like this:
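Again as a sketch of the versions on GitHub:

    <launch>
      <node pkg="rosbag" type="play" name="rosbag" args="-d 2 dataset.bag"/>
      <node name="extract" pkg="image_view" type="extract_images" respawn="false" output="screen" cwd="ROS_HOME">
        <remap from="image" to="/left_camera/image_color"/>
        <param name="filename_format" value="frame%05d.jpg"/>
        <param name="sec_per_frame" value="0.02"/>
      </node>
    </launch>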

Download working Left, Center, and Right jpeg export launch scripts on GitHub

The result should be the correct number of frames saved (numbering starts at frame00000) and the message “[rosbag-1] process has finished cleanly”.

Hit Ctrl + C to exit

[Image: frame00000.jpg (640×480), extracted from topic /left_camera/image_color]

Convert the jpegs to video
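One common way is ffmpeg; a minimal invocation, assuming the roughly 20 fps rate worked out above and the frame%05d.jpg pattern (the output name left_camera.mp4 is just illustrative):

    ffmpeg -framerate 20 -i frame%05d.jpg -c:v libx264 -pix_fmt yuv420p left_camera.mp4

The -pix_fmt yuv420p flag keeps the H.264 output playable in most players.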


License: The data referenced in this post is available under the MIT license.  This post is available under CC BY 3.0.


Udacity open sources 223GB of driving data

Following on the heels of another self-driving car developer, comma.ai, releasing its driving data, Udacity has open sourced two data sets from its self-driving Lincoln MKZ.

Udacity’s data covers over 70 minutes of driving, recorded over two days in Mountain View, Calif.  You can read more in the TechCrunch article or Udacity's Medium post.

The data is available under the MIT License.

We downloaded the first, smaller data set and started exploring the data.

We also have a page tracking available data sets.

License: This post is available under CC BY 3.0.

Self Driving Cars in the News


2015’s big leap into machine learning

A few decades ago, artificial intelligence was just an exciting topic among engineers and developers. In recent years, machine learning has emerged as the ideal outgrowth of big data, breathing new life into concepts such as artificial intelligence. This is how machine learning changed our lives this year and how we expect these advances to impact…

Daily Links Friday 12/18/15