The rapid advancement of autonomous vehicle technology is due to a confluence of factors, including low-cost sensors (thanks to the smartphone revolution) and ongoing improvements in the price, power, and computational performance of microprocessors and graphics processing units. These tools are enabling the practical application of machine learning, creating the electronic brains needed to make the seemingly infinite driving decisions that an experienced human driver takes for granted.
The above video demonstrates self-driving technology deployed in an off-the-shelf Toyota Prius. Filmed on the streets of Las Vegas during CES 2017, Aimotive demonstrates how its software controls the car by integrating GPS data with six cameras to create a 360-degree view of the car’s surroundings at every point along the route.
The magic is in how the software creates depth information from a single camera, as well as in its ability to identify signs, objects, pedestrians, and other vehicles and to drive by simply seeing what people normally see. Niko Eiden, Aimotive’s Chief Operating Officer, explains that their approach allows for the fusion of multiple sensors (e.g. lidar, radar, ultrasonic, inertial GPS), which effectively provides redundancy and increased accuracy.
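To make the redundancy point concrete, here is a minimal sketch of inverse-variance weighting, the idea at the heart of Kalman-style sensor fusion. The sensor names and noise figures are illustrative assumptions, not Aimotive’s actual numbers:

```python
# Minimal sketch (not Aimotive's actual code): fusing two independent,
# noisy distance estimates -- e.g. camera-derived depth and radar range --
# by inverse-variance weighting. The fused estimate is always less noisy
# than either sensor alone, which is why redundancy buys accuracy.

def fuse(est_a: float, var_a: float, est_b: float, var_b: float):
    """Combine two measurements; the fused variance is smaller
    than either input variance."""
    w = var_b / (var_a + var_b)          # trust the less-noisy sensor more
    fused_est = w * est_a + (1 - w) * est_b
    fused_var = (var_a * var_b) / (var_a + var_b)
    return fused_est, fused_var

# Hypothetical readings: the camera says a pedestrian is 24.0 m away
# (variance 4.0, mono depth is coarse); radar says 22.5 m (variance 0.25).
est, var = fuse(24.0, 4.0, 22.5, 0.25)
print(f"fused distance: {est:.2f} m, variance: {var:.2f}")
# -> fused distance: 22.59 m, variance: 0.24 (better than either sensor alone)
```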
This approach means there is no requirement for real-time communication between the vehicle and the cloud. As Eiden points out,
“You get autonomous driving without having to change infrastructure.”
The relevant information gathered during a drive is uploaded to the Aimotive cloud in non-real time (e.g. while parked). This information includes refined maps, which help the next car navigate the streets of Las Vegas. Aimotive’s neural network uses the information to retrain the deep learning algorithms and then communicates the refinements back to the vehicles, helping to improve each vehicle’s self-driving skills.
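The loop Eiden describes (drive, upload while parked, retrain in the cloud, push refinements back to the fleet) can be sketched in a few lines. Everything below is illustrative; none of these class or method names come from Aimotive’s actual system:

```python
# A hedged sketch of the fleet-learning loop described above.
# All names and behavior here are hypothetical stand-ins.

class Cloud:
    """Backend stand-in: pools drive logs, retrains, serves updates."""
    def __init__(self):
        self.fleet_logs = []
        self.model_version = 1

    def upload(self, logs):
        # Step 2: vehicles upload asynchronously, e.g. while parked.
        self.fleet_logs.extend(logs)

    def retrain(self):
        # Step 3: retraining on pooled fleet data, simulated as a version bump.
        if self.fleet_logs:
            self.model_version += 1
        return self.model_version

class Vehicle:
    def __init__(self, cloud):
        self.cloud = cloud
        self.model_version = 1

    def drive(self):
        # Step 1: record map refinements and notable events during the drive.
        return [{"event": "lane_marking_update", "location": "Las Vegas"}]

    def park_and_sync(self):
        # Steps 2 and 4: upload logs, then pull the refined model back down.
        self.cloud.upload(self.drive())
        self.model_version = self.cloud.retrain()

cloud = Cloud()
car_a, car_b = Vehicle(cloud), Vehicle(cloud)
car_a.park_and_sync()        # car A's drive improves the shared model...
car_b.park_and_sync()
print(car_b.model_version)   # ...and car B benefits on its next sync
```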
Just as important, having this data in the cloud allows the testing and simulation of different and rare environments (e.g. snow in Las Vegas). As Rand pointed out last year, real-world driving alone won’t be sufficient to test all the possible use cases. As engineers have done for years with electronic circuits, computer simulation will help find “bugs” faster and more safely than real-world driving.
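A toy example of why simulation scales where road miles don’t: even a handful of scenario parameters multiplies into combinations (snow at dusk with a jaywalker) that a test fleet might never encounter. The scenario parameters and harness below are hypothetical:

```python
# Illustrative only: sweeping rare scenario combinations in software,
# the circuit-simulation analogy from the text. Parameters are made up.

import itertools

weather = ["clear", "rain", "fog", "snow"]
lighting = ["day", "dusk", "night"]
events = ["jaywalker", "stalled_car", "debris"]

def run_scenario(w, light, event):
    # A real pipeline would drive the perception/planning stack through
    # a rendered scene; here we just report the combination under test.
    return f"sim: {w}/{light}/{event}"

# 4 * 3 * 3 = 36 scenarios, covered exhaustively without a single road mile.
for combo in itertools.product(weather, lighting, events):
    print(run_scenario(*combo))
```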
Eiden also discusses their generic artificial intelligence accelerator, which promises to reduce power consumption by 20x; this is critical as vehicles increasingly move toward electric drivetrains. The accelerator also allows more of the artificial intelligence to live at the edge, effectively making each vehicle smarter (e.g. better sensor fusion). This interview was filmed prior to partner Nvidia’s announcement of its Xavier SoC and Xavier DLA, which appear to mirror the tools Aimotive is providing to help improve processing efficiency.
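One generic technique behind such low-power accelerators is quantization: storing weights as 8-bit integers instead of 32-bit floats cuts memory traffic and enables cheap integer math. The sketch below illustrates the idea only; it says nothing about the internals of Aimotive’s or Nvidia’s hardware:

```python
# A toy illustration of symmetric int8 quantization, a generic method
# used in low-power inference hardware -- not a description of any
# specific accelerator's internals.

def quantize_int8(weights):
    """Map floats to int8 values using a single scale factor."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [x * scale for x in q]

w = [0.82, -1.27, 0.003, 0.45]       # hypothetical float32 weights
q, s = quantize_int8(w)
print(q)                             # small integers, 4x smaller than float32
print(dequantize(q, s))              # close to the originals: accuracy mostly kept
```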
From an OEM perspective, Eiden talks about Aimotive’s relationship with Volvo and its Drive Me program. Founded in Hungary in July 2015, Aimotive is a young company and, at the time of filming, had not had a U.S. presence long enough for the car to legally drive without a hands-on human driver.
Stay tuned for a potential follow-up interview at its Silicon Valley site for a hands-free driving demonstration.