Voicing Concerns over Voice Assistants & Voice Security
As the global installed base for smart speaker systems, from Amazon’s Alexa to Apple’s HomePod and Google Assistant, continues to grow, so do stories in the media warning about their potential voice security flaws.
This November marks both Amazon Alexa’s fifth birthday and the publication of the latest Sitel study – Preventing Fraud & Preserving CX with AI – an in-depth report based on responses from 1,200 U.S. consumers surveyed this fall and published in partnership with CallMiner.
Although digital voice assistants have been an integrated part of smartphone operating systems since Apple introduced the world to Siri back in 2011, it wasn’t until Amazon had the idea of making the AI-powered voice assistant a standalone device that things really took off. Over the past five years, the smart speaker category has gone from an invite-only initiative for existing American Amazon customers to a market with a global installed base of 200 million devices – one that grew another 44.3% over the third quarter of 2019 alone.
Talking voice security – are voice assistants secure?
And as these devices become a fixture in the nation’s living rooms, consumers are starting to ask how secure they are for purposes beyond conducting web searches and streaming music.
Even among the youngest, most tech-savvy cohorts, our Preventing Fraud & Preserving CX with AI report finds that security around these devices is a growing concern. Just 23% of millennials, 19% of Gen Zs and 14% of baby boomers said they would feel comfortable making a purchase via a voice assistant. When asked to elaborate, the top reason given was the potential for a bad actor to hack into a smart speaker and steal their information.
Speaking of negative press
In April it came to light that Amazon listens to selected Alexa voice recordings for quality control and to improve its speech recognition and natural language understanding systems. In July, Google admitted to the same practice. And even though in each case the recordings were anonymized, the backlash was such that since August, Amazon, Google and Apple have all officially stopped the practice.
Away from in-house listening, in October researchers detailed how they had circumvented both Amazon’s and Google’s voice app development rules to create horoscope apps that continued to listen to their users even after they appeared to close. Berlin-based Security Research Labs (SRL) undertook the exercise simply to highlight weaknesses in both companies’ ecosystems, and in each case the offending apps were immediately pulled and the vulnerabilities patched.
Perception versus reality
The reality is that all communication channels have their advantages and drawbacks – voice is no less secure than traditional email or chat. Current news reporting on smart speakers is simply putting it in the spotlight. Nevertheless, on one level the news cycle is adding to consumers’ existing concerns around fraud in any channel. Our research found that while 46% of respondents had been victims of fraud, 92% of consumers believe fraud risk is increasing due to our day-to-day activities – even though fraud prevention measures, from artificial intelligence-powered cybersecurity and two-factor authentication for online interactions to stringent PCI compliance in contact centers where credit and debit card information is handled, have never been more robust.
However, this isn’t blind panic. Our findings underline that consumers understand how different channels present different potential risks based on how they’re used. For instance, only 13.89% of respondents feel a voice assistant presents the greatest potential for fraud when contacting a brand’s customer service department, meaning it’s perceived as more secure than a voice call (14.75%) or reaching out via social media (46.92%).
Service not sales, yet
Yet when it comes to making a purchase, 65% of consumers say they wouldn’t feel comfortable completing a sale through a smart speaker. And while 21% of consumers say they would be prepared to use Alexa or Google Assistant to make a purchase, the latest data from Voicebot.ai suggests that to date only 4% of smart speaker owners have used their devices to make a purchase, and only 2% shop with them on a regular basis.
There is little doubt that voice is going to play a major role in the future of customer engagement. As voice assistants become more accurate and more trusted, that progress will translate into better customer experiences across all voice channels, including the contact center. It’s one of the ways our own partnership with CallMiner is helping, using advanced technology to provide greater service to our clients and their consumers.
Technology is only one step. The organizations that succeed in unlocking this channel’s full potential will be those that can clearly communicate the steps they’re taking on security and data protection. A consumer’s perception of reality is reality until that perception is changed. Sitel Group has been – and remains – hard at work laying that foundation so our clients have the confidence to bring these new channels to their customers while ensuring their safety.
To learn more about how U.S. consumers feel about fraud and data sharing across all channels – from the traditional phone call and SMS to social media and voice assistants – and to understand how security expectations change from industry to industry, download our report.