As part of its effort to make Android devices more accessible to people with disabilities, Google has launched Voice Access Beta. It enables a far wider range of voice-controlled functions on Android devices, with broader speech recognition and more spoken feedback.
It takes some getting used to and isn’t entirely intuitive, especially if you’ve been using “OK Google” and related, ordinary Android voice commands. For me, it was pretty buggy and awkward at first.
Essentially, it allows users to do almost anything with voice commands that they could do with the touchscreen. As the demo GIF above shows, numbers are overlaid on apps and on-screen functions and are spoken aloud to navigate. The touchscreen is disabled while Voice Access is running, which can be frustrating, because Voice Access isn’t always as efficient as touch.
As a tool for people with disabilities, it’s terrific. Most regular Android users, however, aren’t going to want to substitute it for their traditional touch experience, except for the novelty of trying it out.
What’s intriguing to me is the way it points toward more voice actions and “conversational” interaction with devices in the future, along the lines of the Star Trek computer that Google has wanted to build since its early days. Indeed, as AI and natural-language understanding continue to improve, we will increasingly speak to our devices, and they will speak back to us.
There’s plenty of evidence of this, from an uptick in voice-search and voice-assistant usage to new “assistant” user interfaces (e.g., Amazon Echo) that have no traditional screen display. These advances are conditioning us to interact with mobile devices and newer machines in very different ways than we do with PCs and the conventional Google SERP.