i3 | April 11, 2017

Self-Driving Artificial Intelligence

by Robert E. Calem
The interior of a self-driving car

Self-driving cars, and the technologies that make them possible, were a recurring motif at CES 2017. The show featured events both indoors and out, from the keynote address by NVIDIA co-founder and CEO Jen-Hsun Huang and exhibits in the Las Vegas Convention Center to rides in self-driving vehicles stationed at the Self-Driving Tech Marketplace in the LVCC’s North Plaza.

NVIDIA started work on self-driving vehicles almost a decade ago, and now the technology is emerging to make a car “your most personal robot,” Huang declared, pointing to machine “deep learning” and “artificial intelligence” (AI) made possible by powerful graphics processing units (GPUs) that are effectively supercomputer chips.

“With deep learning we can now perceive the world, not just sense the world, and also reason about where the car is, where everything else is around the car,” he said. Using AI, the car can also predict where everything around it will be in the near future. The combination of deep learning and AI, Huang added, enables a self-driving car to decide on its own whether the path it’s on, or a new path it could take, would be safe to traverse. “We can use technology to teach a car how to drive just by watching us, observing us and learning from us,” Huang said.

He introduced Xavier, NVIDIA’s newest “AI car supercomputer,” composed of an eight-core 64-bit ARM central processing unit (CPU) and a 512-core Volta GPU, running a new operating system named DriveWorks. “Incredibly powerful,” Xavier fuses data from multiple sensors and controllers in and around the vehicle, along with navigation information from highly detailed HD maps and updates from the cloud, to let the car on its own “perceive, mobilize, reason and drive.”
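As a rough illustration of what that fusion step involves, here is a minimal Python sketch. Every name in it is a hypothetical stand-in for illustration, not NVIDIA’s DriveWorks API, and a real fusion stack would use probabilistic filters rather than simple averaging:

    from dataclasses import dataclass

    @dataclass
    class Detection:
        """One object seen by a sensor: position and closing speed in car coordinates."""
        x: float       # meters ahead of the car
        y: float       # meters left (+) or right (-) of the car
        vx: float      # closing speed, m/s
        source: str    # "camera", "radar", "lidar", ...

    def fuse(detections, grid_cell=0.5):
        """Merge overlapping detections from different sensors into single tracks.

        This toy version groups detections that fall in the same grid cell and
        averages them, just to show the data flow of multi-sensor fusion.
        """
        buckets = {}
        for d in detections:
            key = (round(d.x / grid_cell), round(d.y / grid_cell))
            buckets.setdefault(key, []).append(d)
        tracks = []
        for group in buckets.values():
            n = len(group)
            tracks.append(Detection(
                x=sum(d.x for d in group) / n,
                y=sum(d.y for d in group) / n,
                vx=sum(d.vx for d in group) / n,
                source="+".join(sorted({d.source for d in group})),
            ))
        return tracks

    # Example: camera and radar both see the same car about 20 m ahead.
    tracks = fuse([
        Detection(20.1, 0.2, -3.0, "camera"),
        Detection(19.9, 0.1, -2.8, "radar"),
    ])
    print(tracks)  # one fused track, sourced "camera+radar"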

Of course, there will be situations that could confuse even a fully self-driving car, in which case a human would need to take back control, Huang conceded. So, “we believe the car should also be an AI copilot,” he added, introducing a Xavier-based feature named AI Co-Pilot. It leverages cameras outside the vehicle, plus cameras, speakers and microphones inside it, to constantly monitor the surrounding environment, the driver and the passengers – and to respond appropriately. For example, AI Co-Pilot could read the driver’s lips to determine a spoken command when the cabin is too noisy for voice recognition to work, or notice that the driver is looking away from a road hazard, such as a bicyclist in the vehicle’s path, and verbally warn him or her. Tapping into the on-board navigation system, AI Co-Pilot could even open a driveway gate just in time, because it knows minutes in advance that the vehicle is approaching, Huang suggested.
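Reduced to a toy rule loop, the copilot behaviors Huang described might look like the following Python sketch; the inputs and thresholds are invented for illustration, not published NVIDIA logic:

    def copilot_step(cabin_noise_db, driver_gaze, hazard_bearing,
                     spoken_words, lip_read_words):
        """One decision step of a hypothetical AI copilot.

        Mirrors two behaviors from the keynote: lip reading when the cabin
        is too loud for microphones, and a gaze-versus-hazard warning.
        """
        actions = []

        # Fall back to lip reading if the cabin is too noisy (70 dB is invented).
        command = spoken_words if cabin_noise_db < 70 else lip_read_words
        if command:
            actions.append(f"execute command: {command}")

        # If a hazard exists and the driver is looking elsewhere, warn verbally.
        if hazard_bearing is not None and driver_gaze != hazard_bearing:
            actions.append(f"speak warning: hazard {hazard_bearing}")

        return actions

    # Noisy cabin, driver looking left while a bicyclist is ahead:
    print(copilot_step(85, "left", "ahead",
                       spoken_words=None, lip_read_words="open the garage"))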

Building this advanced car technology is “the most complex computing problem that we’ve ever tackled,” Huang said. “This is high-performance computing done in real-time, and the AI algorithms that we’re developing are all first of their kind.” Two of the largest tier-one suppliers to the automotive industry – Bosch and ZF – have teamed with NVIDIA to build Xavier AI car supercomputers, and ZF will have them available to automakers starting this year.

Audi will be the first automaker to produce a fully self-driving “AI car” integrating NVIDIA’s technology. Scott Keogh, Audi of America’s president, joined Huang to announce that the car will launch by 2020. Meanwhile, at CES, Audi gave rides in a Q7 SUV demonstration vehicle that Keogh said was trained to navigate the closed course in only four days, using deep learning and AI.

Huang’s keynote set the scene for other companies to introduce and demonstrate their own self-driving car technologies at CES.

Nissan at CES 2017

Nissan Chairman and CEO Carlos Ghosn gave a keynote in which he introduced the automaker’s Seamless Autonomous Mobility (SAM) system. It, too, blends AI and deep learning technologies to teach a self-driving vehicle how to drive, but adds a human element: assistance from people called Mobility Managers, who can remotely instruct the vehicle how to deal with a driving disruption, such as an obstacle in the road, by sending it a path to follow in real time. (The Mobility Manager can remotely view the vehicle’s camera and sensor data to assess the situation.) In turn, SAM communicates the guidance to other self-driving vehicles nearby, so that they can learn to navigate past the obstacle without human help.

SAM is based on software developed by NASA to visualize, supervise and remotely guide interplanetary robots. Nissan envisions it helping other automakers’ self-driving vehicles, as well.
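In outline, SAM behaves like a human-in-the-loop override protocol: a stuck vehicle first checks fleet knowledge, then escalates to a person. Here is a minimal Python sketch under that assumption; the function and message names are hypothetical, since Nissan has not published SAM’s interfaces:

    import queue

    # Stand-ins for the cloud link between vehicles and Mobility Managers.
    help_requests = queue.Queue()
    learned_paths = {}   # obstacle location -> path that worked, shared fleet-wide

    def vehicle_encounters_obstacle(vehicle_id, location, sensor_snapshot):
        """A stuck vehicle reuses fleet knowledge if it exists, else asks a human."""
        if location in learned_paths:
            return learned_paths[location]
        help_requests.put((vehicle_id, location, sensor_snapshot))
        return None   # wait for the Mobility Manager

    def mobility_manager_respond(path_drawer):
        """A human reviews the camera/sensor snapshot and draws a safe path."""
        vehicle_id, location, snapshot = help_requests.get()
        path = path_drawer(snapshot)          # human-in-the-loop decision
        learned_paths[location] = path        # shared with nearby vehicles
        return vehicle_id, path

    # Demo: one car asks for help; the "human" routes it around the obstacle.
    snapshot = {"front_camera": "construction blocking lane"}
    vehicle_encounters_obstacle("car-1", "5th & Main", snapshot)
    print(mobility_manager_respond(lambda s: ["shift left", "pass", "merge back"]))
    print(vehicle_encounters_obstacle("car-2", "5th & Main", snapshot))  # no human needed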

“Our goal is to change the transportation infrastructure,” said Maarten Sierhuis, former NASA scientist and director of the Nissan Research Center in Silicon Valley, who also spoke at the Nissan keynote. “We want to reduce fatalities and ease congestion. We need a huge number of vehicles out there. What we are doing at Nissan is finding a way so that we can have this future transportation system not in 20 years or more, but now.”

Ghosn also announced ProPILOT, a self-driving function that works on single-lane highways and will be built into the next-generation Nissan Leaf electric car, to be introduced soon. The automaker anticipates introducing self-driving on multi-lane highways – the ability to change lanes autonomously – by 2019, and self-driving in city traffic by 2020, added Takao Asami, senior vice president of research and development, also at the Nissan keynote.

Also at CES 2017

Mercedes-Benz also included a presentation from NVIDIA’s Huang at its exhibit space, alongside Sajjad Khan, Mercedes-Benz’s vice president of digital vehicle and mobility. Both spoke about the companies’ three-year collaboration on AI for cars, and Khan revealed that Mercedes-Benz will bring its first NVIDIA-powered AI vehicle to market within 12 months.

BMW staged rides in a prototype fully self-driving 5 Series sedan that incorporated new ways for passengers to interact with the vehicle, including a next-generation gesture control system. Users can point at a nearby building (such as a theater) with a thumb gesture to see information, such as details of an entertainer’s show, retrieved from the cloud. In addition, BMW, Intel and Mobileye announced they will field a fleet of fully self-driving test vehicles in the second half of this year, incorporating an Intel computing platform and Mobileye’s computer vision processing technology. BMW is expected to introduce its first fully self-driving vehicle, the iNEXT, in 2021.

Chrysler introduced Portal, a concept car built by and for “millennials” (people born between 1982 and 2001). Although engineered to be semi-self-driving in certain highway situations, it can be upgraded to full self-driving, thanks to an array of sensors (LiDAR, radar, sonar and vision) that constantly monitor conditions both inside and outside the vehicle. The interior sensors use facial recognition and voice biometrics technologies.

Ford showed an updated fully self-driving prototype vehicle outfitted with a new, differently packaged array of sensors. The new packaging takes Ford closer to its goal of bringing a fully self-driving vehicle to market by 2021, said Bryan Goodman, technical leader for AV machine learning at Ford. While last year’s AV prototype had four huge LiDAR sensors spread across the top of its roof, the vehicle shown at CES 2017 has just two tiny solid-state LiDAR units positioned on its A-pillars (the supports flanking the windshield), providing better vision around objects in front of the vehicle, Goodman said. The new vehicle also has six stereo cameras integrated in the roof rack rails (one on the front, side and rear of each rail) and six radars (short-to-medium-range units in the four corners of the vehicle, plus one long-range unit each in the front and rear). Six Intel Core i7 CPUs and two NVIDIA GPUs process the sensor data and run the AI. As a result, the new vehicle can manage driving situations that last year’s prototype could not, Goodman said.
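Written out as data, the sensor suite Goodman described tallies up as follows; this is a hypothetical configuration listing for illustration, not Ford’s actual software:

    # Hypothetical inventory of the CES 2017 Ford AV sensor suite, as described.
    SENSOR_SUITE = {
        "lidar":  [{"type": "solid-state", "mount": "A-pillar", "side": s}
                   for s in ("left", "right")],                          # 2 units
        "camera": [{"type": "stereo", "mount": f"roof-rail-{rail}", "facing": f}
                   for rail in ("left", "right")
                   for f in ("front", "side", "rear")],                  # 6 units
        "radar":  ([{"range": "short-medium", "mount": f"corner-{c}"}
                    for c in ("FL", "FR", "RL", "RR")] +
                   [{"range": "long", "mount": m}
                    for m in ("front", "rear")]),                        # 6 units
        "compute": {"cpus": "6x Intel Core i7", "gpus": "2x NVIDIA"},
    }

    for kind in ("lidar", "camera", "radar"):
        print(kind, len(SENSOR_SUITE[kind]))   # 2, 6, 6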

Toyota unveiled Concept-i, which uses AI and sensors inside and outside the vehicle to detect and monitor both a driver’s emotions and road conditions, so it can offer either automated driving support (augmented by visual or haptic stimuli) or fully self-driving capability when necessary.

Valeo demonstrated how semi-self-driving capabilities and advanced driver assistance systems (ADAS) can be improved with new hardware, software and connectivity, augmenting technologies already offered in vehicles. Staged rides showed off four innovations the company debuted at CES: eCruise4U, a self-driving system for electric vehicles; XtraVue, a set of computer-vision cameras that can connect to and communicate with identical cameras in other vehicles to show drivers what they otherwise couldn’t see on the road ahead; 360AEB Nearshield, an automatic braking system that alerts drivers to obstacles in the vehicle’s blind spots and stops the vehicle to prevent an impact if necessary; and C-Stream, a sensor module that replaces the rear-view mirror, maps the vehicle’s cabin, determines the number of people in the vehicle and monitors the driver’s alertness level.

In-Vehicle Experiences

When cars drive themselves, what will the people inside be doing? At CES, automakers and some leading technology suppliers introduced and demonstrated new in-vehicle user experiences and the tools to create them.

Chrysler’s Portal Concept, an electric vehicle (EV) with semi-autonomous driving capabilities, is sometimes driven and sometimes driving itself. But in either mode, facial and voice recognition technologies help keep everyone content by determining who’s sitting in which seat and tailoring the ride with his or her preferred settings: favorite music in a personal music zone, cabin-zone temperature, seat heating or cooling, and lighting. When a person changes seats in the vehicle, these settings follow. Predictive intelligence allows passenger media preferences to be blended – for example, to create a music or video playlist that everyone would enjoy. And gaze tracking places critical notifications on-screen where the driver will see them soonest, minimizing reaction time. Underpinning all of this are the Panasonic Cognitive Infotainment (PCI) platform and wireless connectivity and audio systems from Panasonic Automotive Advanced Engineering.
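The seat-following behavior amounts to binding a settings profile to a recognized person rather than to a seat. A minimal Python sketch of that idea, with invented names rather than Chrysler’s or Panasonic’s code:

    # Each recognized occupant carries a settings profile; seats just look it up.
    PROFILES = {
        "alice": {"music": "jazz", "temp_c": 21, "seat": "heated", "light": "warm"},
        "bob":   {"music": "rock", "temp_c": 19, "seat": "cooled", "light": "dim"},
    }

    seating = {}  # seat position -> occupant, updated by face/voice recognition

    def occupant_detected(seat, person):
        """Called whenever recognition places a person in a seat; settings follow."""
        seating[seat] = person
        apply_settings(seat, PROFILES[person])

    def apply_settings(seat, prefs):
        print(f"{seat}: music={prefs['music']}, temp={prefs['temp_c']}C, "
              f"seat={prefs['seat']}, lighting={prefs['light']}")

    occupant_detected("front-left", "alice")
    occupant_detected("rear-right", "bob")
    occupant_detected("rear-right", "alice")   # Alice moves; her zone follows her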

Honda debuted the NeuV, or New Electric Urban Vehicle, which integrates an AI assistant enabled by an “emotion engine.” The assistant detects the emotions behind the driver’s judgments and, based on the driver’s decision history, offers suggestions and recommendations – for example, a music choice based on mood. NeuV is a component of Honda’s Cooperative Mobility Ecosystem, introduced at CES alongside a collaboration with DreamWorks Animation that aims to develop new cross-platform augmented reality and virtual reality content and solutions for future vehicles. As an example, Honda presented Honda Dream Drive, a proof-of-concept in-car virtual reality prototype featuring exclusive DreamWorks Animation content.

Panasonic launched the Panasonic Cognitive Infotainment (PCI) platform, which leverages IBM’s Watson cognitive technology (for deep natural language processing) and cloud connectivity to answer questions and provide recommendations and directions while the vehicle is driving to a destination. PCI also enables e-commerce transactions from within the vehicle. Panasonic demonstrated this at CES with a mock fast-food purchase, showing attendees how an order can be placed verbally through the infotainment system, paid for from the car and timed for on-time pickup.

Valeo unveiled the Experience of Traveling Cockpit for driverless vehicles, which divides the user experience into three phases. The “task of driving” phase, when a human is in control, uses projected and moving light to draw the driver’s attention to hazards on the road or in a blind spot, and emits an energizing fragrance from the cabin vents to help wake a drowsy driver. The “experience of traveling” phase, when the vehicle is driving itself, offers gesture-controlled reading lamps (the size of their beams adjusted with hand movements), an ionizer that releases a relaxing fragrance, and a cooling-mist dispenser that maintains an optimal humidity level in the passenger cabin. Finally, the “back to drive” phase, when the driver is required to take back control of the vehicle, uses flashing lights to guide his or her attention back to the steering wheel and releases an energizing fragrance from the vents.
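Those three phases behave like a simple state machine over who is in control, with the cabin effects keyed to the current state. A brief Python sketch, using the phase names from above and invented action strings:

    # Cabin behaviors keyed to Valeo's three cockpit phases, as described above.
    PHASES = {
        "task_of_driving": {
            "in_control": "human",
            "actions": ["project moving light toward hazards",
                        "emit energizing fragrance if driver is drowsy"],
        },
        "experience_of_traveling": {
            "in_control": "vehicle",
            "actions": ["enable gesture-controlled reading lamps",
                        "release relaxing fragrance",
                        "run cooling-mist humidifier"],
        },
        "back_to_drive": {
            "in_control": "handover",
            "actions": ["flash lights guiding attention to the wheel",
                        "emit energizing fragrance"],
        },
    }

    def enter_phase(name):
        phase = PHASES[name]
        print(f"--> {name} (control: {phase['in_control']})")
        for action in phase["actions"]:
            print("   ", action)

    for name in ("task_of_driving", "experience_of_traveling", "back_to_drive"):
        enter_phase(name)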

Volkswagen demonstrated its vision for next-generation user interfaces in an interactive experience at its booth. The Digital Cockpit (3D) positions two screens one behind the other, creating a three-dimensional effect that the automaker says makes navigating the displays easier to learn. Eye tracking monitors where the user is looking on the 3D display and calls up the desired control or information. And an augmented reality (AR) head-up display (HUD) projects information in virtual form on two levels that appear to be on the road itself, in front of the vehicle: Level 1 provides navigation route data or the distance to the vehicle ahead, while Level 2 handles any other data, including infotainment or personal information.

With technology developments advancing so quickly, CES 2018 should contain even more surprise announcements.

Self-Driving Vehicles Working Group

CTA’s Self-Driving Vehicles Working Group (SDVWG) aims to drive the adoption of self-driving vehicles and driver-assist technologies across American roadways expeditiously and safely. SDVWG Chair Jessica Nigro, manager, Outreach and Innovation Policy at Daimler North America Corp., says, “Technology almost always moves faster than policy. But for society to realize the full benefits of automated driving systems (ADS), sound policy and consumer acceptance need to advance. CTA’s SDVWG companies are joining forces to remove roadblocks and make ADS commercialization a reality.”
