Can you believe that we can now simply talk to our screens and get things done, without filling in forms, dragging and dropping objects, or clicking buttons?
Yes, what was mere science fiction thirty years ago is becoming reality. Going forward, the entire field of visual interface design - literally everything we know about placing controls, handling mouse and touch interaction, even picking colours - will be affected by the switch to conversational interfaces, or will go away altogether.
The Evolution of UI
Graphical User Interfaces (GUIs) provide a visual way to interact with a device, whilst Conversational User Interfaces (CUIs) allow one to talk to a device as a way of interacting with it. Though CUIs aren't completely new, they are rapidly becoming smarter, more natural, and therefore more useful.
The UI has evolved from the text-based interfaces of early generations of computers, where interaction with one's machine was a series of text inputs and outputs. The graphical user interface became popular in the early 1980s and has dominated technology ever since.
Though GUIs are here to stay for some time and cannot be done away with altogether, their monopoly is going to be challenged more than ever before by the rise of Conversational User Interfaces (CUIs): bots, virtual assistants and invisible apps. As they become mainstream, CUIs will forever change the way we design, develop and interact with our technology. In short, the conversational UI is going to be the next big digital disruption.
We are rapidly building intelligent applications that we can talk to in order to get things done. As the basic technologies required for this are already in place, we may well witness conversational user interfaces replacing GUIs altogether, becoming the gateway to the digital world.
CUI doesn't just mean conventional speech technologies. Natural language recognition makes it possible to understand the intention behind our words, even when we misspeak. This has been made possible by advances in machine learning and Big Data. Moreover, as CUIs grow more sophisticated with AI, their applications will extend far beyond what we can imagine today.
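The idea of recovering intent from imperfect speech can be illustrated with a minimal sketch. This is not a real NLU system; the intent names and example phrasings below are hypothetical, and fuzzy string matching stands in for the machine-learning models a production CUI would use.

```python
import difflib
from typing import Optional

# Hypothetical intent vocabulary: each intent maps to example phrasings.
INTENTS = {
    "play_music": ["play some music", "put on a song", "play my favourite music"],
    "set_thermostat": ["set the temperature", "make it warmer", "adjust the thermostat"],
}

def recognise_intent(utterance: str, cutoff: float = 0.5) -> Optional[str]:
    """Return the best-matching intent, tolerating small misspeaks."""
    best_intent, best_score = None, 0.0
    for intent, phrases in INTENTS.items():
        for phrase in phrases:
            # Similarity ratio in [0, 1]; high even when words are garbled.
            score = difflib.SequenceMatcher(None, utterance.lower(), phrase).ratio()
            if score > best_score:
                best_intent, best_score = intent, score
    return best_intent if best_score >= cutoff else None

print(recognise_intent("play sum musik"))  # fuzzy match still resolves to "play_music"
```

The point is that the interface maps what we meant, not what we literally said, onto an action; real systems do this with statistical models trained on large corpora rather than edit distance.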
A graphical user interface is comparatively cumbersome, as it requires many clicks through different screens. We must manipulate a number of graphical objects such as buttons and text boxes to accomplish simple tasks like booking a ticket, reserving seats for our favourite show, or shopping online.
There has been a rapid increase in the information and tasks available to us. This requires application designers to pack more functionality into ever-decreasing screen sizes. In the near future, interface screens may be replaced with a conversational user interface that enables users to talk with their devices.
Mobile personal assistants like Apple's Siri, Nuance's Dragon Mobile Assistant and Samsung's S-Voice can rightly be described as first-generation CUIs. Though these apps help us with some basic tasks, the scope of their comprehension and helpfulness is currently limited.
Amazon's Echo is a digital assistant like Siri or Google Now. It accepts voice commands addressed to it by the name Alexa and responds with answers in natural language. One can use this cloud-connected CUI to perform various tasks, from asking it to play favourite music to controlling home appliances such as lights and thermostats.
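At its core, an Echo-style assistant routes a recognised utterance to a device action. The sketch below illustrates that routing step only; the device names, handlers and keyword table are hypothetical, and this is not the real Alexa Skills API.

```python
# Hypothetical device handlers that an assistant might dispatch to.
def play_music(track: str) -> str:
    return f"Playing {track}"

def set_light(state: str) -> str:
    return f"Lights turned {state}"

def set_thermostat(degrees: str) -> str:
    return f"Thermostat set to {degrees} degrees"

# Simple keyword-based routing table: trigger word -> handler.
COMMANDS = {
    "play": lambda words: play_music(" ".join(words)),
    "lights": lambda words: set_light(words[0] if words else "on"),
    "thermostat": lambda words: set_thermostat(words[0] if words else "20"),
}

def handle(utterance: str) -> str:
    """Find the first trigger word and pass the rest of the utterance to its handler."""
    words = utterance.lower().split()
    for i, word in enumerate(words):
        if word in COMMANDS:
            return COMMANDS[word](words[i + 1:])
    return "Sorry, I didn't understand that."

print(handle("Alexa play jazz"))      # -> Playing jazz
print(handle("Alexa thermostat 22"))  # -> Thermostat set to 22 degrees
```

A production assistant replaces the keyword table with cloud-hosted speech recognition and intent models, but the overall shape - utterance in, structured action out - is the same.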
User Benefits of a CUI
Google, Facebook and Amazon can easily infer a user's purchase behaviour through analytics. A CUI, however, will understand our tastes, our routines, which ingredients are running low in our refrigerator, even our moods and feelings... all based on our previous interactions.
While driving, a CUI will answer texts and check directions, enabling us to drive without ever taking our eyes off the road. The service doesn't stop there: it extends to muting notifications and playing our favourite music to keep driving distractions to a minimum.
The interface allows us to double-check the inventory tracking parameters of a product in a massive warehouse without having to stop a forklift.
CUIs will power the most capable virtual assistants ever to hit the market. This is the coolest part about the CUI of the future. Those of us with a fully functioning set of senses and limbs can handle GUI technology well. But for users with visual or motor impairments, CUIs will open up the world in ways GUIs never could, and never will.
CUIs are intelligent interfaces that understand, comprehend and apply the information fed to them. While receiving a diagnosis based on our symptoms is plain utility, understanding those symptoms, locating the nearest specialist, and cross-referencing our schedule against the doctor's availability to book an appointment is intelligence. This is the future we can look forward to with CUIs.
A CUI can allow people to talk about virtual objects or upcoming events that have no graphical representation, such as a distant GPS location or an upcoming payday.
The Pain Points of a CUI
However, there are drawbacks. As it has no visual side, a CUI can only be perceived by ear, and one could be competing with other conversations on the same channel.
Besides, certain tasks like multiple selections, document browsing, and map search are still far better performed by GUI with a pointing device and buttons to click.
Having said this, voice recognition accuracy has improved dramatically in recent years, and language and reasoning programs have reached a useful level of sophistication. Better models of cooperation and collaboration are on the rise. Going forward, we'll soon have intelligent, fully conversational interfaces adaptable to just about anyone.
This is perceived to be the interface of the future, made even more necessary as computing propagates beyond laptops, tablets and smartphones to smartcars, thermostats, wearables and even home appliances, connecting 5 billion devices on the go.
Both CUI and GUI capabilities are changing rapidly, so it's time for developers to prepare and adapt quickly. This is a unique opportunity for designers to redefine their relationship with these technologies.
The words "We shape our tools and thereafter they shape us" echo this sentiment, and hint at what is to come.