[Image: Amazon Echo, housing the Alexa intelligent agent]
When it comes to next-generation technologies like AI, the discussion tends to be binary. Boosters (industry insiders and developers) see the social benefits, while detractors only half-jokingly predict Terminator-style dystopian futures.
Could both be right? Can social and consumer benefits exist even as these new technologies lead to civil rights abuses and creeping authoritarian control?
New technologies like virtual assistants and AI have great potential benefits, but they can also be abused if ethical and legal restraints aren't imposed alongside them. Promotion of the positive social outcomes of technologies such as AI should therefore coincide with a frank discussion of their potential for exploitation and abuse. For example, while AI might make job recruiting, health care or bank lending more efficient, it might just as easily produce unfair outcomes and outright discrimination.
Earlier today, Amazon’s Alexa was accused of “eavesdropping” on a Washington-state couple and sending a recording of their private conversation to a random person in the couple’s contact list. Reportedly, the device was triggered by a misinterpreted wake word, which started the recording; it then misheard a later part of the conversation as a command to send that recording to the contact.
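Amazon hasn’t published the internals of this pipeline, but the failure chain is easy to picture as a small state machine: a marginal wake-word match starts recording, and a second marginal match on intent (plus a guessed contact) ships the audio out. Here’s a minimal sketch in Python, with entirely invented names, of how two low-confidence misrecognitions in a row can compound:

```python
# A toy sketch of how two consecutive misrecognitions could compound in a
# voice-assistant pipeline. All names here are invented for illustration;
# this is not Amazon's actual code or API.
from dataclasses import dataclass, field

@dataclass
class ToySpeaker:
    awake: bool = False
    recording: list = field(default_factory=list)

    def hear(self, utterance: str) -> None:
        if not self.awake:
            # Failure 1: background speech scores above the wake-word
            # threshold, so the device silently starts recording.
            if sounds_like_wake_word(utterance):
                self.awake = True
            return
        self.recording.append(utterance)
        # Failure 2: ordinary conversation is then misheard as a
        # "send message" intent, and a contact is guessed from context.
        if parse_intent(utterance) == "send_message":
            send(self.recording, guess_contact(utterance))
            self.awake = False

def sounds_like_wake_word(u: str) -> bool:
    # Real systems use an acoustic model with a confidence threshold;
    # set the threshold too low and you get exactly this false accept.
    return "alexa" in u.lower()

def parse_intent(u: str) -> str:
    return "send_message" if "send" in u.lower() else "none"

def guess_contact(u: str) -> str:
    return "some contact from the address book"

def send(recording: list, contact: str) -> None:
    print(f"Sent {len(recording)} utterances to {contact}")

# Two marginal errors in a row, and a private conversation leaves the home:
s = ToySpeaker()
s.hear("...I'll ask Alexandra about the hardwood floors...")  # false wake
s.hear("we should send the contractor the measurements")      # false intent
```

The point of the toy example is that the stages multiply: a rare false accept at the wake word only causes real harm when it is followed by another rare false accept at the intent stage, which is why incidents like this are uncommon but, at Alexa’s scale, never zero.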
Amazon acknowledged the mistake and said it would quickly correct the problem. A different version of this happened in October 2017 with the Google Home Mini, where a faulty touch sensor caused some units to record almost constantly; Google called that a “bug” and addressed it too.
While it’s clearly not the intention of Google or Amazon to spy on their customers (some skeptics might debate that point), episodes like these are deeply disturbing. They remind us of what could go wrong: the “Black Mirror” version of reality.
In a more authoritarian society, these devices could be used to maintain surveillance of those perceived as a threat to the government or other powerful interests. China is already an example of a near-total surveillance state, using advanced technology, including facial recognition, in an effort to maintain political and social control. The same technologies are available here, where they’re used in commercial and other “beneficial” capacities.
In the West, you have consumer convenience; in China, dystopian control. And while China and the US are not the same, does the simple existence of technology such as facial recognition make its abuse inevitable?
Amazon’s computer vision and facial recognition service, Rekognition, is now being deployed in public spaces by law enforcement in US cities in Oregon and Florida. Will this help local law enforcement make cities safer, or will it result in abuses? Your answer may largely depend on your perception of law enforcement.
In a 2017 criminal case in Arkansas, prosecutors sought Alexa voice recordings in a murder investigation. Amazon fought to prevent authorities from getting access to these recordings without a warrant. The defendant ultimately consented to the release of the data, so the warrant issue was never formally decided. But as more American homes install smart speakers — we have seven in our home — will police routinely seek stored conversations in criminal investigations? The temptation will be great.
Use of next-gen technologies, therefore, needs robust oversight. Self-regulation is not enough; 2016 and its aftermath have proven that. Indeed, despite years of Silicon Valley pledges of fidelity to consumer control, privacy and transparency, it’s Europe’s GDPR (albeit flawed) that is finally motivating many tech companies to deliver on those promises.
We’re now at an inflection point where the sophistication of technology, including AI, is accelerating. I was in the audience at Google I/O when the company debuted “Duplex,” its impressive conversational AI, which seemingly passed the Turing test. Can you imagine that capability coupled with nonstop robo-calling from an offshore call center?
What I’m saying is that for every impressive advance that offers speed, convenience or efficiency to end users, we need to have a corresponding conversation about how to prevent or minimize the flip side: abuse. And these conversations should be proactive, rather than reactive. Otherwise, “Black Mirror” or “The Handmaid’s Tale” won’t be cautionary, fictional shows that creep us out — they’ll be our reality.