Today, during an event focused on Alexa developer and partner news, Amazon announced new features and integrations, including a collection of APIs and software development kits (SDKs) aimed at making Alexa more versatile. They come a month after the company demoed forthcoming and experimental Alexa features at its annual re:Mars conference in Las Vegas, including the ability to synthesize longer speech from short audio clips.
A highlight of today’s event was the previewing of Amazon’s Universal Device Commands (UDC) and Agent Transfers (AT), a pair of technologies intended to simplify the task of interacting with multiple voice assistants on the same device. On devices that support Alexa and another voice assistant, like Sonos’ Sonos Voice Control, UDCs will let users say commands (e.g., “Turn up the volume”) using compatible wake words (e.g., “Hey, Sonos”), even if the target assistant wasn’t originally used to initiate the request. ATs, meanwhile, will allow voice assistants to transfer user requests (e.g., “Ask UberEats to place an order”) to other assistants when they don’t have the ability to fulfill them.
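Neither UDC nor AT has a public developer surface yet, so there is nothing official to show; purely as an illustration, the minimal Python sketch below models how a multi-assistant device might honor a device-level command from either wake word or hand a request off to the assistant that can fulfill it. Every class, method and capability name here is invented for the example.

```python
# Hypothetical sketch only: Amazon has not published a UDC/AT API.
# Every name here (Assistant, MultiAssistantRouter, the capability strings)
# is invented to illustrate the two behaviors described above.
from dataclasses import dataclass, field


@dataclass
class Assistant:
    name: str
    wake_words: set[str]
    capabilities: set[str]  # e.g. {"volume"} or {"food_ordering"}

    def handle(self, request: str) -> str:
        return f"{self.name} handled {request!r}"


@dataclass
class MultiAssistantRouter:
    assistants: list[Assistant] = field(default_factory=list)

    def dispatch(self, wake_word: str, request: str, capability: str) -> str:
        # The assistant the user actually addressed.
        source = next(a for a in self.assistants if wake_word in a.wake_words)

        # Universal Device Command: device-level controls such as volume are
        # honored no matter which wake word the user happened to use.
        if capability in source.capabilities:
            return source.handle(request)

        # Agent Transfer: hand the request to an assistant that can fulfill it.
        for other in self.assistants:
            if other is not source and capability in other.capabilities:
                return f"{source.name} transferred to {other.handle(request)}"

        return f"{source.name} could not fulfill {request!r}"


router = MultiAssistantRouter([
    Assistant("Alexa", {"alexa"}, {"volume", "food_ordering"}),
    Assistant("Sonos Voice Control", {"hey sonos"}, {"volume", "music"}),
])

print(router.dispatch("hey sonos", "Turn up the volume", "volume"))                     # UDC
print(router.dispatch("hey sonos", "Ask UberEats to place an order", "food_ordering"))  # AT
```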
Amazon telegraphed its plans for UDC and AT two years ago in a whitepaper outlining design recommendations for members of the Voice Interoperability Initiative (VII), the company’s program to ensure voice-enabled products ship with multiple voice assistants (inclusive of Alexa, unsurprisingly). VII members, which include Dolby and Facebook, agree to develop voice assistants that work “seamlessly” with others and build devices that ship with at least one assistant other than their own.
Amazon said that it’ll enable “cloud side” infrastructure for UDCs and ATs within the next year. In a peek at what’s to come, Skullcandy and Native Voice, a tech consultancy, worked together to make Alexa and “Hey, Skullcandy” commands available simultaneously on Skullcandy’s Push Active and Grind Series headphones.
There’s a limit to what UDC and AT can accomplish, what with Google and Apple so far declining to join VII. But even absent Siri or Google Assistant, Amazon makes the case that UDC and AT can deliver superior experiences by combining the capabilities of multiple assistants.
Smart home tools
In a development that’s tangentially related — at least in the sense that it has to do with smart devices — Amazon debuted the Alexa Connect Kit (ACK) SDK for Matter, a software package that extends the existing ACK to devices supporting Matter, the smart home connectivity standard that runs over Wi-Fi and Thread. Launched in 2019, ACK runs firmware that enables communication between a device and ACK-managed services like logging, usage metrics and over-the-air updates. Essentially, ACK (and, by extension, ACK for Matter) allows companies to use Amazon and Alexa cloud services while retaining control over the hardware that delivers those services.
The ACK SDK for Matter includes Amazon’s “frustration-free setup” (FFS) technology, which lets smart home devices automatically connect to a local network the minute they’re powered on. Amazon says it plans to expand FFS from devices purchased through Amazon to devices purchased through third-party retailers, beginning with Matter devices from Eve Systems, Leedarson, Sengled and Nanoleaf.
Amazon also says that it’ll soon launch a credentials API, allowing customers who opt in to share their network credentials with Alexa and other device makers’ apps and services. When it’s available, customers will be able to choose which devices and service providers to share network details with and set up those devices without having to enter an SSID or password. The API will first support Thread credentials for Matter devices, with broader support to follow, Amazon says.
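The credentials API isn’t available yet either, so there’s nothing concrete to point to; as a rough, hypothetical model of the opt-in flow described above (every class, method and field name is an assumption), a sketch might look like this in Python:

```python
# Hypothetical sketch: the credential-sharing API is not public yet, so the
# classes, consent model and payload shapes below are all assumptions.
from dataclasses import dataclass


@dataclass
class ThreadCredential:
    network_name: str
    # The real Thread operational dataset is a binary blob; a hex string
    # stands in for it here.
    operational_dataset_hex: str


class CredentialSharingService:
    """Models the opt-in flow Amazon describes: customers choose which device
    makers may receive network credentials, so a new Matter device can be set
    up without the user retyping an SSID or password."""

    def __init__(self) -> None:
        self._consented_partners: set[str] = set()
        self._credentials: dict[str, ThreadCredential] = {}

    def opt_in(self, partner: str) -> None:
        self._consented_partners.add(partner)

    def store(self, home_id: str, credential: ThreadCredential) -> None:
        self._credentials[home_id] = credential

    def fetch_for_setup(self, partner: str, home_id: str) -> ThreadCredential:
        if partner not in self._consented_partners:
            raise PermissionError(f"{partner} has not been granted access")
        return self._credentials[home_id]


service = CredentialSharingService()
service.store("home-1", ThreadCredential("MyThreadNet", "0e08"))
service.opt_in("Eve Systems")
# A partner app can now commission a device without prompting the user:
print(service.fetch_for_setup("Eve Systems", "home-1").network_name)
```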
The Alexa Ambient Home Dev Kit, also launched today, offers APIs and services targeting smart home use cases, including device and group sync, safety and security, and multi-admin setup for Matter. (Multi-admin allows Matter devices to be controlled by multiple smart home systems at the same time.) As Amazon explains in a blog post:
For example, if a customer leaves home and arms their security system, they want their robot vacuum to know they’re gone so it can start cleaning. If they open their smart lights app and rename Guest Room to Office, they want that change to cascade to other systems — including Alexa. The Alexa Ambient Home Dev Kit will enable these experiences along with many more.
New APIs in the Ambient Home Dev Kit extend Alexa Guard, Amazon’s sound-identifying security feature, to third-party apps, allowing customers to be alerted through a custom app when the sounds of smoke or carbon monoxide alarms are detected. Other APIs allow customers to expose (somewhat creepily) the state of their home to developers, such as “Home,” “Vacation” and “Dinner Time.” Using signals from various devices throughout the house, Amazon says it can determine what’s most likely happening and relay that information to third-party services, devices and apps — for example, adjusting the thermostat and locking doors during the “Sleep” state.
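The Ambient Home Dev Kit APIs themselves aren’t public, but the pattern Amazon describes is ordinary event handling. The Python sketch below illustrates that pattern only: the state names, callback signatures and device classes are assumptions, not the kit’s real interface.

```python
# Hypothetical sketch: the Ambient Home Dev Kit APIs are not yet public, so the
# state names, callbacks and device classes below are assumptions used to
# illustrate the pattern Amazon describes.


class Thermostat:
    def set_target(self, degrees_f: int) -> None:
        print(f"Thermostat set to {degrees_f}F")


class DoorLock:
    def lock(self) -> None:
        print("Door locked")


class Notifier:
    def push(self, message: str) -> None:
        print(f"Push notification: {message}")


def on_home_state_changed(state: str, thermostat: Thermostat, locks: list[DoorLock]) -> None:
    """React to a home-state transition such as "Home", "Vacation" or "Sleep"."""
    if state == "Sleep":
        thermostat.set_target(66)  # assumed nighttime setpoint for the example
        for lock in locks:
            lock.lock()


def on_guard_alert(alert_type: str, notifier: Notifier) -> None:
    """Forward an Alexa Guard sound detection to a partner's custom app."""
    if alert_type in {"SMOKE_ALARM", "CARBON_MONOXIDE_ALARM"}:
        notifier.push(f"Alexa Guard heard a {alert_type.replace('_', ' ').lower()}")


on_home_state_changed("Sleep", Thermostat(), [DoorLock(), DoorLock()])
on_guard_alert("SMOKE_ALARM", Notifier())
```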
New SDKs
The Ambient Home Dev Kit arrives alongside Alexa Voice Service (AVS) SDK 3.0, which combines two existing SDKs — the Alexa Smart Screen SDK and the AVS Device SDK — into a single toolkit for building Alexa voice and multimodal experiences into devices. One of the headliners is video calling between third-party Alexa devices, a feature that was previously limited to Amazon Echo devices. AVS SDK 3.0 also brings support for third-party smart home cameras, letting users see live camera feeds and chat with people at the camera through microphones and speakers.
On the multimedia side, AVS SDK 3.0 ships with the clumsily named “Alexa Video Skills Kit (VSK) over AVS Downchannel,” which can be used to create Alexa skills that control smart TVs. Apps and devices using the AVS SDK 3.0 can let Alexa users search for and play content, open apps, change the volume and more, Amazon says.
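Amazon hasn’t published the directive payloads for VSK over AVS Downchannel, but conceptually a TV integration dispatches incoming voice directives to device actions. The Python sketch below illustrates that idea with invented directive names and a stub TV class; it is not the actual interface.

```python
# Illustrative sketch: the directive names, payload fields and TV class below
# are assumptions about how a smart-TV integration might dispatch voice
# commands delivered over the AVS downchannel.


def handle_video_directive(directive: dict, tv) -> None:
    """Map a voice directive delivered over the downchannel to a TV action."""
    name = directive["name"]
    payload = directive.get("payload", {})

    if name == "SearchAndPlay":    # "Alexa, play The Expanse"
        tv.play(payload["title"])
    elif name == "LaunchApp":      # "Alexa, open Prime Video"
        tv.launch(payload["app"])
    elif name == "AdjustVolume":   # "Alexa, turn it up"
        tv.adjust_volume(payload.get("delta", 1))


class FakeTv:
    def play(self, title: str) -> None:
        print(f"Playing {title}")

    def launch(self, app: str) -> None:
        print(f"Launching {app}")

    def adjust_volume(self, delta: int) -> None:
        print(f"Volume {delta:+d}")


handle_video_directive({"name": "SearchAndPlay", "payload": {"title": "The Expanse"}}, FakeTv())
```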
A separate new SDK, the Alexa for Apps SDK, lets developers build Alexa functionality, including voice search, navigation and custom interactions, into mobile apps. Amazon previously offered Alexa-powered features for mobile through Alexa for Apps, an API, but that service was limited in what it could accomplish. The SDK greatly expands the ways Alexa can be integrated with mobile apps — perhaps signaling Amazon’s broader ambitions for Alexa on mobile.
“The Alexa for Apps SDK is the first from Amazon to support the development of Alexa functionality within a new or existing app,” Aaron Rubenson, VP of Alexa Voice Service and Alexa Skills at Amazon, told TechCrunch via email. “For example, a fitness app could use the Alexa for Apps SDK to design a workout with Alexa as the personal coach.”
Promotions
Several of the tools announced today are aimed at helping developers build a business, like Alexa Skill Deals, which lets developers set up discounts on in-skill purchases and paid skills that last for up to a year. Promoted Skills, meanwhile, lets developers run paid campaigns to promote their Alexa skills.
As Rubenson explained when asked about Promoted Skills: “[They] are designed to be similar to how sellers and book authors advertise their products on Amazon’s web and mobile properties. Later this year, developers will be able to set up skill promotions or campaigns designed for Echo Show home screens. In order to set up the campaigns, developers will upload their creative, select a campaign type, set a start and end date, and place a bid.”
Promoted Skills pricing has yet to be determined, Rubenson said.
Meanwhile, the Alexa Routines Kit enables skill creators to surface prebuilt routines to customers when they interact with their skill, while the Alexa Shopping Kit allows developers to offer product recommendations (and earn a 10% commission) that customers can add to their shopping carts and lists and then purchase. According to Amazon, Grupo Planeta, the Spanish-language book publisher, is using the Alexa Shopping Kit to deliver book recommendations through Diana, the company’s mindfulness skill.
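The announcement doesn’t spell out how a recommendation is wired up in a skill’s code. As a sketch, the raw response below follows the general pattern of Alexa’s skill connections tasks for shopping actions; treat the task URI, payload shape and response fields as assumptions to verify against current documentation, and note that the ASIN and title are placeholders.

```python
# Hedged sketch: the Connections task URI and payload follow the general
# pattern of Alexa shopping actions; verify the exact shape against current
# documentation before relying on it.


def recommend_book_response(asin: str, title: str) -> dict:
    """Build a raw Alexa skill response that surfaces a book recommendation
    and asks Alexa to add it to the customer's cart."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {
                "type": "PlainText",
                "text": f"Based on today's session, you might enjoy {title}. "
                        "Adding it to your Amazon cart.",
            },
            "directives": [
                {
                    # Assumed shopping-action task; purchases that start from a
                    # recommendation like this earn the developer a commission.
                    "type": "Connections.StartConnection",
                    "uri": "connection://AMAZON.AddToShoppingCart/1",
                    "input": {"products": [{"asin": asin}]},
                }
            ],
        },
    }


# Example: a mindfulness skill recommending a placeholder title at session end.
print(recommend_book_response("B000000000", "a guided meditation journal"))
```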
Amazon’s last bid to lure developers to the ecosystem came roughly last December, when the company lowered its cut of Alexa skill revenue from 30% to 20% for developers who earn less than $1 million in revenue. As my colleague Sarah Perez noted at the time, relatively few developers have been able to capitalize on Alexa’s sizable footprint in U.S. consumers’ homes to create profitable businesses, despite Amazon’s efforts.
That’s not stopping the company from trying. Today, Amazon announced that developers earning less than $1 million in revenue will also receive 10% of their skill’s earnings as an additional “value back” incentive, paid out in cash for this year. The increased revenue share takes effect starting with July’s earnings, Amazon clarified in a blog post.