Amazon announced the AZ1 Neural Edge processor at its autumn event. The company says the chip will speed up Alexa's responses to users' queries and commands by hundreds of milliseconds per response.
Amazon developed the chip in cooperation with MediaTek, which allows neural speech recognition to run on the device itself in new products. Amazon's newly introduced products, including the new Echo smart speaker, Echo Dot, Echo Dot with Clock, Echo Dot Kids Edition, and the Echo Show 10 smart display, include this processor. Amazon says these products also carry the additional on-device memory required for this level of processing. The AZ1 will appear in more Echo devices in the future.
Amazon's existing products without the AZ1 send the voice and its corresponding interaction to the cloud, that is, to a remote server, where the request is processed and the answer is sent back. In new products featuring the AZ1, by comparison, audio is processed on the device, which shortens the response time for users. Regardless of where a device processes your voice, the Alexa app displays your voice history the same way.
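The latency difference between the two paths can be sketched with a simple model: the cloud path pays a network round trip plus server processing, while the on-device path pays only local processing. All figures below are hypothetical placeholders for illustration, not numbers published by Amazon.

```python
# Illustrative latency model for cloud vs. on-device voice processing.
# All millisecond values are hypothetical, chosen only to show the shape
# of the trade-off described in the article.

def cloud_response_ms(network_rtt_ms: float, server_processing_ms: float) -> float:
    """Cloud path: audio travels to a remote server and the answer comes back."""
    return network_rtt_ms + server_processing_ms

def edge_response_ms(on_device_processing_ms: float) -> float:
    """Edge path (AZ1-style): speech is recognized on the device itself."""
    return on_device_processing_ms

cloud = cloud_response_ms(network_rtt_ms=250, server_processing_ms=150)
edge = edge_response_ms(on_device_processing_ms=180)
print(f"cloud: {cloud:.0f} ms, edge: {edge:.0f} ms, saved: {cloud - edge:.0f} ms")
```

With these placeholder numbers the edge path saves a few hundred milliseconds, which matches the order of magnitude Amazon claims; real savings depend on network conditions and model size.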
Amazon notes that the latency gains will apply to American English first, with more languages supported over time.
This collaboration between Amazon and MediaTek recalls Microsoft's work with Qualcomm on the SQ1 processor for the Surface Pro X. In practice, though, the AZ1 looks more like the Neural Core processor Google used in the Pixel 4. Besides enhancing photography, that chip let the device understand spoken English and transcribe audio recordings into text without an internet connection.
Amazon did not say whether the new Echo devices can be used without an internet connection. Still, performing more operations on the device rather than in the cloud should improve the user experience.