By Bernard Brafman, Sensory Inc., and Justin Moon, QNX Software Systems
In-vehicle infotainment systems are becoming increasingly complex and increasingly integral to the overall driving experience. As this trend continues, it will become ever more important to create systems that support multiple forms of user interaction. If you’re driving, the last thing you want to do is enter a destination manually, or hunt for your favorite artist in a playlist with a touch screen, jog wheel, or other manual input method. Drivers want, and need, a user experience that is simple and natural; integrating speech recognition technology goes a long way toward achieving that goal.
In fact, speech recognition is a key component of the latest QNX technology concept car, a modified Bentley Continental GT. The speech rec system lets you plot a route or select your favorite artist using natural speech, but it goes even further by letting you simply ask the car to perform an action. Leveraging Sensory’s FluentSoft SDK, and more specifically its TrulyHandsfree™ Voice Control technology, the QNX concept development team used keyword spotting to let the driver interact with the vehicle by voice.
So how does this work? Well, let’s say you’re in Vegas and need directions to the Wynn Casino. To engage the cloud-based Watson speech system, you simply say “Hello Bentley” — no need to push a button. You then complete the request by saying “Take me to the Wynn Casino.” FluentSoft, along with the architecture of the advanced speech recognition system included in the QNX CAR platform, allowed the team to create this seamless, easily implemented, and well-executed voice interaction experience.
When you say “Hello Bentley,” the QNX concept car displays a visual prompt at the top of the screen, indicating that the speech rec system is now listening for natural speech or directed commands.
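For readers curious about the plumbing, here is a minimal sketch of that flow in Python. It assumes a hypothetical on-device keyword spotter and a cloud recognizer interface; the names (mic, spotter, cloud_asr, hmi, dispatcher) are illustrative only, not the actual FluentSoft or QNX CAR APIs.

def voice_loop(mic, spotter, cloud_asr, hmi, dispatcher):
    # The wake-word check runs continuously and entirely on-device,
    # so nothing is sent to the cloud until "Hello Bentley" is heard.
    while True:
        frame = mic.read_frame()                   # small audio buffer
        if spotter.detect(frame):                  # local "Hello Bentley" spotting
            hmi.show_listening_prompt()            # visual cue at the top of the screen
            utterance = mic.record_until_silence()
            text = cloud_asr.recognize(utterance)  # e.g. "Take me to the Wynn Casino"
            dispatcher.execute(text)               # route to navigation, media, etc.
            hmi.hide_listening_prompt()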
Multiple triggers
The team plans to make further use of Sensory technology in future concept car releases. The current implementation has a single trigger, “Hello Bentley”, which engages the speech system. But TrulyHandsfree Voice Control supports multiple active triggers as well as a robust recognition vocabulary, making it possible to build a rich command-and-control user experience that doesn’t require prompts or pauses. Thus, it’s possible to create a hybrid system that is seamless and transparent to the user. For instance, “Hello Bentley air 68 degrees” or “Hello Bentley what time is it in Tokyo?” can both be executed flawlessly, regardless of whether the embedded command-and-control vocabulary or the cloud-based recognizer ends up handling the request.
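A hybrid system like this boils down to a routing decision: if the words that follow the trigger match the embedded command-and-control vocabulary, act on them immediately; otherwise, hand the utterance to the cloud recognizer. The sketch below illustrates the idea with hypothetical names; it is not the Sensory or QNX API.

def handle_utterance(text, local_commands, cloud_asr):
    # local_commands maps a keyword ("air", "call", ...) to an on-device handler.
    words = text.split()
    keyword = words[0].lower() if words else ""
    if keyword in local_commands:
        local_commands[keyword](text)   # e.g. "air 68 degrees" -> climate control
    else:
        cloud_asr.handle(text)          # e.g. "what time is it in Tokyo?"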
A matter of choice
For an even more personalized experience, this technology can allow drivers to create their own custom trigger through a simple one-time enrollment process that either verifies their identity, using the phrase as a voice password, or identifies them as one of several previously enrolled drivers. This creates a custom experience not only by letting you choose your own trigger phrase (come on now, who hasn’t named their car at some point?), but also by recalling individual preferences such as seat position, steering wheel position, and multimedia presets.
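Conceptually, the enrollment data is just a small profile store: a one-time enrollment ties a driver’s custom trigger and voiceprint to a set of preferences, and each later trigger detection looks up who spoke. The sketch below is a simplified illustration with hypothetical names, not the actual enrollment API.

profiles = {}   # speaker_id -> {"trigger": ..., "voiceprint": ..., "preferences": {...}}

def enroll(speaker_id, trigger_phrase, voiceprint, preferences):
    # One-time enrollment: tie a custom trigger and voiceprint to a driver profile.
    profiles[speaker_id] = {"trigger": trigger_phrase,
                            "voiceprint": voiceprint,
                            "preferences": preferences}

def on_trigger(voice_sample, verifier, cabin):
    # Identify which enrolled driver spoke the trigger and restore their settings.
    speaker_id = verifier.identify(voice_sample, profiles)   # None if no match
    if speaker_id is not None:
        prefs = profiles[speaker_id]["preferences"]
        cabin.restore(seat=prefs["seat"], wheel=prefs["wheel"], presets=prefs["presets"])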
Look for these enhanced features in concept cars to come!
Bernard Brafman is vice president of business development for Sensory, Inc., responsible for strategic business partnerships. He received his MSEE from Stanford University. Contact Bernard at bbrafman@sensoryinc.com
Justin Moon is a global technical evangelist for the automotive business development team at QNX Software Systems.