Deep learning at the core of Audi piloted driving

(June 7, 2015) HERNDON, Va. — As Audi perfects its approach to piloted driving, its engineers are relying on an advancement that developers call “deep learning” to train computers to imitate the human brain. Progress in this form of machine learning was crucial for the piloted-driving run of “Jack,” the Audi A7 Sedan that transported a group of automotive journalists some 550 miles from Silicon Valley to the International Consumer Electronics Show in Las Vegas in January.

And deep learning is at the center of the fast evolution of piloted driving toward a commercially available vehicle that can get itself to any destination with little human help.

Working with key suppliers such as NVIDIA, the digital-tech company based in Santa Clara, California, we are creating an automobile-computer model that simulates the way the brain processes new information.

Think of the car’s way of learning as similar to a child’s. Caregivers teach a baby to identify things she perceives with her senses: a circle, a square, colors. Object edges are very important in this process. The edges form meaningful, distinct shapes, which the brain starts to recognize. A fire truck is red and has a certain shape and wheels, but at first the baby might think all trucks are fire engines. Then the child learns to differentiate among the kinds of trucks.

That’s how the nexus of our piloted-driving technology, the zFAS central driver-assistance controller, works. Camera images arrive as streams of pixels, much as the human eyeball transfers images to the brain. The Audi processor, about the size of a tablet PC and powered by NVIDIA’s Tegra processor, analyzes every frame of video that comes in, sensing edges and grouping them into shapes. It learns that the shapes are objects, then learns to differentiate those objects.
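The edge-sensing step described above can be sketched with a classic Sobel filter, the textbook way a vision pipeline finds object edges in a frame. This is an illustrative sketch only; Audi’s actual zFAS software is proprietary, and the small synthetic frame below is invented for demonstration.

```python
import numpy as np

# Sobel kernels: horizontal and vertical gradient detectors.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def convolve2d(image, kernel):
    """Valid-mode 2-D convolution (no padding)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def edge_magnitude(image):
    """Gradient magnitude: strong values mark object edges."""
    gx = convolve2d(image, SOBEL_X)
    gy = convolve2d(image, SOBEL_Y)
    return np.hypot(gx, gy)

# A tiny synthetic frame: dark left half, bright right half.
frame = np.zeros((8, 8))
frame[:, 4:] = 1.0
edges = edge_magnitude(frame)
# The vertical boundary between the halves gives the strongest response.
print(edges.max())
```

In a real pipeline this filtering runs on dedicated hardware for every incoming frame, and the resulting edge map is what gets grouped into candidate shapes.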

This artificial intelligence enables the Audi processor to detect features such as eyes, a nose and a mouth, and to figure out that they fit together into a face. It also allows Audi vehicles to detect and identify other vehicles. All of this information goes into a database to foster future advances in such recognition. The system serves as one of the important bases of intelligence for piloted driving.
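The idea of composing low-level features into a higher-level object can be shown with a toy rule. In practice, systems like the one described learn these layers from data (for example, with convolutional networks); the feature list and the hand-written rule below are invented purely for illustration.

```python
# Detected low-level features in a frame, as (label, x, y) tuples.
# These sample detections are made up for the example.
detections = [
    ("eye", 40, 30), ("eye", 60, 30),
    ("nose", 50, 45), ("mouth", 50, 60),
]

def looks_like_face(feats):
    """Crude composition rule: two eyes above a nose above a mouth."""
    eyes = [f for f in feats if f[0] == "eye"]
    noses = [f for f in feats if f[0] == "nose"]
    mouths = [f for f in feats if f[0] == "mouth"]
    if len(eyes) < 2 or not noses or not mouths:
        return False
    lowest_eye = max(e[2] for e in eyes)  # y grows downward in images
    return lowest_eye < noses[0][2] < mouths[0][2]

print(looks_like_face(detections))
```

A learned system replaces this brittle hand-written rule with weights tuned over many examples, which is exactly why the training data the article mentions matters so much.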

With every mile, the car gets smarter. But it takes more than terabytes of such data to make for successful autonomous driving. The data also must be processed very quickly: 30 video frames a second. The information must be transmitted, recognized, processed, analyzed, and acted upon almost instantaneously, in case an Audi driver encounters tricky conditions.
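The arithmetic behind that 30-frames-a-second requirement is a latency budget of roughly 33 milliseconds per frame, shared across every stage of the pipeline. The stage names and percentage splits below are illustrative assumptions, not Audi's actual figures.

```python
FPS = 30
frame_budget_ms = 1000 / FPS  # ~33.3 ms to fully handle each frame

# Hypothetical share of the budget per processing stage.
stages = {
    "transmit":  0.10,
    "recognize": 0.35,
    "process":   0.25,
    "analyze":   0.20,
    "react":     0.10,
}

for name, share in stages.items():
    print(f"{name:>9}: {share * frame_budget_ms:5.1f} ms")
print(f"    total: {frame_budget_ms:5.1f} ms per frame")
```

Any stage that overruns its slice forces the system to drop frames, which is why so much of the recognition work is pushed onto dedicated on-board silicon.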

That’s why one of the most important objectives of deep learning is to ensure that every bit of object recognition is embedded in the processor on board the Audi vehicle, not dependent on a connection to the internet cloud.

So while deep learning isn’t the only technology that we must perfect on our way to offering the ultimate in piloted driving, it is one of the most important aspects of how Audi is advancing toward the future.