It’s pouring rain in Las Vegas. Traffic is heavy on all six lanes, drivers are edgy.
It’s no wonder that Werner Preuschoff is entirely focused on steering his new Mercedes A-Class across “the Strip” in one piece.
He doesn’t have a free hand to enter navigation details or change any settings, let alone to check e-mails or manage his schedule for the next few days. Once upon a time, Preuschoff would have had to pull over or wait until the end of his drive to handle these tasks.
But the Mercedes engineer isn't impeded by traffic today. Instead, he adjusts the temperature, finds restaurant recommendations and organizes his next day in the office – all without taking his hands off the steering wheel or his eyes off the road.
It’s as if he’s speaking to his secretary, but he’s actually dictating to his car.
And he doesn't need to learn any new commands, as is the case with Apple’s Siri or Amazon’s Alexa. “Drivers can start a dialogue with the system just by saying ‘Hey Mercedes’,” says the project leader of the new Mercedes-Benz User Experience (MBUX).
The expanded MBUX infotainment system, including a large touch screen and even more lavish graphics, will be launched this summer with the Mercedes A-Class. And it should help put an end to drivers’ helpless stammering and the system’s occasionally incomprehensible reactions.
The voice assistant Casey, introduced by the supplier Bosch, uses a similar principle. It recognises commands in 30 different languages and is trained to understand natural speech patterns.
And unlike Mercedes’ system, Bosch’s Casey can be renamed, meaning that drivers can finally name their own car, says one Bosch expert at the recent CES tech trade fair in Las Vegas.
The system’s brainpower is located in the so-called head unit, and the analysis takes place in the car. If an internet connection is available as well, these services can provide up-to-date information on the weather or the menus of select restaurants. Like Mercedes, Bosch’s system makes use of artificial intelligence to observe the driver and his habits and learn lessons for future scenarios.
Improvements in voice commands were one of the biggest trends at the CES this year. And it’s not difficult to see why: vehicles are becoming more and more complex.
The range of infotainment options and functions is nearly endless. And because autonomous driving, along with electric motors, is a major trend, people will have more and more time to occupy themselves in the car.
“That means the user interface needs to be fascinating as well as functional,” says Carsten Breitfeld, the head of the Chinese startup Byton, which plans to release an off-road vehicle in 2019 with a cockpit more impressive than its electric engine.
That’s because its entire dashboard consists of only one screen controlled by touch operations, gesture commands and – of course – voice commands with Amazon’s Alexa.
Such systems clearly reduce the complexity of operating a car, and in vehicles like Byton’s no more than a handful of buttons remain. But that’s not enough for the Japanese manufacturer Nissan, which wants to literally read the driver’s thoughts by tapping into their brain waves with so-called “brain-to-vehicle” technology.
This technology analyses the driver’s brain waves and lets them flow into interactions with the vehicle. As a result, the system should be able to anticipate braking and steering movements and implement them more quickly.
The A-Class isn’t as far along as this, however. Preuschoff is still circling around the Las Vegas trade fair in his prototype. He probably spoke to his car today more than he did with his family. It’s no wonder his mouth is getting dry — he could probably use a beer.
“Hey Mercedes, take me to the Hofbrauhaus,” are his last words in his conversation with the future. He’ll probably leave it at a non-alcoholic beer. Because even the A-Class isn’t an electric car – and it doesn’t have an autopilot. — dpa