The smart home may soon resemble something like HAL in 2001: A Space Odyssey, thanks to Amazon’s latest addition to Alexa: AI.
In an announcement last month, the retail giant gave a glimpse into the future of AI with a new large language model (LLM) and a suite of conversational AI capabilities designed to transform Alexa into a more intuitive device.
Amazon first released Alexa in 2014, ushering in one of the first pieces of smart home kit that most people would encounter. The device has become so ubiquitous that Amazon has sold half a billion units, which generate tens of millions of interactions every hour. Several new versions have been released over the years, with updates ranging from hardware to software, yet the latest announcement marks the first time AI has been slated for the platform.
This AI implementation comes amid a race by the tech titans to position themselves as forerunners in a phenomenon they believe will be the future. Facebook has turned its focus from the metaverse to AI, Google has created Bard, its own ChatGPT equivalent, and the stock prices of chip makers like Arm and NVIDIA have soared on demand for AI chips.
But is Amazon’s foray into AI just a way to jump on the bandwagon? Or is there something substantial in the announcement? Here is what Amazon says the new AI-enabled Alexa will be capable of.
Alexa will have a newly developed LLM meticulously designed for voice interactions. This new model promises to elevate Alexa’s capabilities, focusing on five foundational areas of enhancement.
Conversation: Amazon has fused input from Echo’s sensors—like cameras and voice recognition—with AI models capable of understanding these non-verbal signals. Moreover, latency has been minimized for uninterrupted conversations that align with voice communication standards. For instance, when seeking the latest news on a trending topic, Alexa delivers concise and relevant information, leaving room for further inquiries.
Real-world utility: the new Alexa LLM is poised to connect with hundreds of thousands of real-world devices and services through APIs, enabling it to process nuance and ambiguity, much like a human. For example, users can program complex routines entirely through voice commands, such as setting a nightly bedtime routine for kids, adjusting lights, turning on the porch light, and activating bedroom fans.
Personalization and context: Alexa will retain relevant context throughout conversations, allowing users to ask follow-up questions without reiterating prior details.
Personality – hello, HAL!: the new LLM aims to give the device a distinct personality, making interactions more engaging. Responses can now express opinions, celebrate achievements, or even draft enthusiastic notes of congratulation.
Trust: Amazon promises greater trust and privacy, a concern that has long dogged Alexa’s sceptics, claiming to give users control, transparency, and peace of mind.
In addition to these new capabilities, some existing features are getting updates. Users will be able to initiate interactions with Alexa without a wake word by enrolling in Visual ID; a new conversational speech recognition (CSR) engine adjusts to natural pauses and hesitations; and generative AI-enhanced text-to-speech aims to let Alexa adapt its tone and responses to match the nuances of human conversation.
There’s plenty of other editorial on our sister site, Electronic Specifier! Or you can always join in the conversation by commenting below or visiting our LinkedIn page.