Conversation – the Voice-Interaction Feature
Conversational Platforms have come a long way in the past few years. Incredible processing power, coupled with lightning-fast cloud connectivity, has given rise to many popular services and ‘personalities’ (Siri, Google Assistant, Alexa, and others).
Apple, Google, Amazon, and many other companies continue to make significant investments in their conversational platforms.
Personally, I’ve noticed conversational technologies starting to arrive in some of my smartphone apps. Have you?
Instead of having users interact with an app through typing, clicking, and swiping, conversational interfaces let them interact through the more natural methods of speaking and listening.
Conversation Support in an App
As an app developer, assembling the elements required for conversational interactions can be daunting. One has to integrate Automatic Speech Recognition (ASR) with Natural Language Processing (NLP) modules, including the relevant Natural Language Understanding (NLU) workflows, and provide Text-To-Speech (TTS) conversion for responses.
ASR provides the capability to convert Speech to Text. At a high level, NLP deals with the Syntax of the text (grammatical constructs), whereas NLU deals with its Semantics (understanding the meaning) and Pragmatics (understanding the intent).
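To make the division of labor concrete, here is a minimal sketch of how these pieces fit together in one round trip. Everything in it is hypothetical: the function names, the hard-coded transcription, and the "aisle 4" app logic are illustrative stand-ins, not any real ASR/NLU/TTS service.

```python
# Illustrative sketch only: recognize_speech, extract_intent, and
# synthesize_speech are hypothetical stand-ins for real cloud services.

def recognize_speech(audio: bytes) -> str:
    """ASR stand-in: a real service would transcribe the audio."""
    return "where is the milk"  # faked transcription for the sketch

def extract_intent(text: str) -> dict:
    """NLU stand-in: map transcribed text to an intent plus entities."""
    tokens = text.lower().split()  # a crude NLP step: tokenization
    if tokens[:2] == ["where", "is"]:
        return {"intent": "locate_item", "item": tokens[-1]}
    return {"intent": "unknown"}

def synthesize_speech(reply: str) -> bytes:
    """TTS stand-in: a real service would return audio for the reply."""
    return reply.encode("utf-8")

def converse(audio: bytes) -> bytes:
    """Wire ASR -> NLU -> app logic -> TTS into one voice interaction."""
    result = extract_intent(recognize_speech(audio))
    if result["intent"] == "locate_item":
        reply = f"The {result['item']} is in aisle 4."  # made-up app logic
    else:
        reply = "Sorry, I didn't catch that."
    return synthesize_speech(reply)
```

The point of the sketch is the wiring, not the components: each stub would normally be a separate SDK or cloud API, and keeping them in sync is exactly the integration burden described above.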
The OS-specific Conversational Platforms, Siri (on iOS) and Google Assistant (on Android), offer basic ASR and NLP but limit the NLU to only what an app pre-defines with the OS. Lex (from Amazon) and DialogFlow (from Google) are OS-independent; they offer everything the OS Conversational Platforms do and additionally allow the app to perform more app-specific NLU.
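To illustrate what "app-specific NLU" means, here is a sketch of the kind of custom intent an app might register with an OS-independent platform. The structure loosely mirrors the intent / training-phrase / slot model used by services like Lex and DialogFlow, but the field names and the toy matcher are illustrative, not any vendor's actual schema or API.

```python
# Illustrative only: this dict mimics the intent / training-phrase / slot
# model of cloud NLU services; it is not any vendor's actual schema.
from typing import Optional

locate_item_intent = {
    "name": "LocateItem",
    "training_phrases": [
        "where is the {item}",
        "where can I find {item}",
        "I am looking for {item}",
    ],
    "slots": {"item": "any"},  # the entity the service should extract
}

def matches_training_phrase(utterance: str, intent: dict) -> Optional[dict]:
    """Toy matcher: compare the utterance word-by-word to each phrase,
    capturing {slot} placeholders. Real services generalize far beyond
    the literal training phrases; this only shows the data flow."""
    words = utterance.lower().rstrip("?").split()
    for phrase in intent["training_phrases"]:
        pattern = phrase.split()
        if len(pattern) != len(words):
            continue
        slots = {}
        for p, w in zip(pattern, words):
            if p.startswith("{") and p.endswith("}"):
                slots[p[1:-1]] = w  # capture the slot value
            elif p.lower() != w:
                break  # mismatch; try the next phrase
        else:
            return {"intent": intent["name"], "slots": slots}
    return None
```

The app, not the OS, owns this definition, which is what lets an OS-independent platform handle domain-specific requests the built-in assistants would not understand.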
Onymos Conversation Feature
It shouldn’t be a huge surprise that Onymos has developed a Conversation Feature. As with all Onymos Features, we strive to simplify and abstract the complexity of the underlying platforms (iOS, Android) and backend cloud systems.
Ask things like “Where is the milk?”
As of this writing (mid-2020), Onymos Conversation is in beta. We expect it to be commercially available by the end of Q3 2020.
If you’d like to learn more about this capability, or have specific technical questions, fill out the Contact Us form available here.