Artificial Intelligence is one of the biggest topics at the 2018 Mobile World Congress, and with good reason. It has the potential to revolutionize every aspect of human existence, from the way we work to the way we play. However, there are growing concerns that unless future AI advances take an ethical approach, the story could end the way many sci-fi novels do: with a very difficult lesson learned.

“As engineers and technologists, we should always be mindful that there are consequences to our work,” explains Rob High. The VP and CTO for IBM Watson has become a key voice in the debate around AI and ethics and this week led a dedicated session on the topic at the 2018 Mobile World Congress in Barcelona, Spain.

A force for good or bad?

Despite the excitement that permeated this year’s Mobile World Congress, even the most enthusiastic tech firms concede that AI won’t be a genuine force in the world for another 20-25 years. However, that doesn’t mean we should hold off debating the ethics and moral stances needed to ensure the technology honestly delivers.

According to Gartner, within the next two years, AI bots will account for 85% of customer service interactions. Likewise, a host of car companies, from BMW to Tesla and Volvo, are touting 2021 as the year that the first cars with Artificial Intelligence, rather than a driver, behind the wheel will officially hit the road.

Unexpected consequences

“AI opens up whole new fields of potential value,” says Rob High. “But with that comes uncertainty about unexpected consequences…We need to be intelligent about how we create and enable it, how we control it and what we demand of it. This is why the ethics of artificial intelligence are so important.”

It’s a point on which Sitel Group is aligned. “Look at the number of research papers about the social dilemmas that an AI-powered self-driving car will have to overcome if it is really going to be able to mimic a human driver,” says Stéphane Akkaoui, head of R&D at Innso, the Sitel Group’s software venture. “In the case of an accident, how will it decide who to protect – its occupants or the child that has run into the road causing the accident?”

Musk moves on

Indeed, on this very topic, in the days before MWC got underway, Tesla CEO Elon Musk stepped down from his role on the board of OpenAI, an independent research group he helped to found that is committed to keeping AI open and honest. He cited a potential conflict of interest as the reason for his decision: his electric car company is now so invested in AI that Musk believes he could become one of these “unexpected consequences.”

A human approach to AI

“At the heart of the matter is that computers lack the things that make us human. Social ways of thinking, ethics and morals are all abstract concepts,” points out Stéphane Akkaoui. It’s why Sitel Group has decided the best way to inject morals and ethics into AI development is to combine AI with human capabilities, rather than create solutions that replace a person. “We promote the combination of human and AI to power the customer experience. Whatever our endeavor, we stick to the idea that ethics, morals, quality and efficiency are more important than costs or automation for the sake of automation,” says Stéphane Akkaoui.

Sitel Group