As the use of artificial intelligence systems in the enterprise continues to grow, so do questions about AI ethics. How can we trust something we don’t understand, and how do we know if its results are biased?

According to the PwC 2018 AI Predictions report, the technology is developing at such speed that if someone says they know exactly what artificial intelligence (AI) will look like and do in the next 10 years, you should smile politely, then change the subject or walk away.

And while no one working in the industry can say what it will be capable of doing in 2029, all stakeholders agree that over the next decade AI will become part of the fabric of our everyday lives, touching every aspect of society, hopefully for the better.

The power to change the world

“AI is key for addressing many of the grand challenges facing the world, such as global health and wellbeing, climate change, reliable legal and democratic systems,” said Pekka Ala-Pietilä, chair of the European Union’s High-level Expert Group on Artificial Intelligence. “AI is one of the most transformative forces of our time. It presents a great opportunity to increase prosperity and growth.”

For many businesses, these opportunities are already clear. In the retail industry alone, Capgemini data suggests organizations will spend $7.3 billion on AI initiatives by 2023. Meanwhile, according to the latest NewVantage Partners study, 91.6 percent of Fortune 1000 companies have begun investing in AI-related technologies in a race to keep up with early adopters who are already experiencing “moderate” or “substantial” benefits.

Indeed, from chatbots and voice bots to self-driving cars and automated cancer diagnoses, there seems to be no end to AI’s potential applications. But as excitement mounts, so too, quite rightly, does concern about how the technology is used and how it should be regulated.

“We need to be smart and open about how we develop AI and how and why we apply it,” begins Martin Wilkinson-Brown, Chief Marketing Officer of Sitel Group. “This is why we need AI ethics; we need to devise a human-led approach to this technology.”

The black box problem

In its current form, AI can automate something as simple as capturing the spoken word and translating it into text to fill out a standard form, or something as complicated as beating Garry Kasparov at chess.

However, the more an AI leverages machine learning and neural networks to crunch immense amounts of data, the less likely a human is to understand how the AI arrived at its conclusion. This is known as the black box problem. If a system is so complicated that a user doesn’t understand how it works, how can we trust the decisions it makes?

The most often-cited example is the autonomous car. How would an AI-enabled self-driving car negotiate the social dilemma of deciding whom to save in the event of an accident – the car’s occupants or the person who has run into the road? And just as importantly, how would we understand and trust how it arrived at that decision?

Building trust in AI by putting people first

The autonomous car is still some years away, but consumers are asking these types of AI ethics questions. The 2018 Salesforce Trends in Customer Trust study shows that over one third of consumers globally don’t trust AI and 54 percent don’t believe organizations act with their best interests at heart, while 60 percent of all respondents are concerned that a company’s use of AI could compromise their personal information.

“It’s only natural not to trust something you don’t understand and trust is central to doing business,” Wilkinson-Brown explains. “From data breaches to media stories about the potential impacts of AI, consumers are becoming increasingly concerned about how their data is used and shared and are gravitating toward brands that focus on transparency. As innovative as AI is, it can’t continue to develop or develop in the right direction unless it is trusted by society.”

At Sitel, our vision for AI is to assist and augment our people’s capabilities, and all of our research and development into the technology has focused on how it can empower its users.

“And that’s why trust is of critical importance,” Wilkinson-Brown explains. “Our people have to have complete confidence in the technology and in the insights or services it provides. It’s why whatever our endeavor in this field, we put AI ethics first and include our people in the R&D phase.”

Tech companies taking a human approach

The major tech companies, aware of these growing concerns, have started taking similar steps. Since 2017, Microsoft has had FATE (Fairness, Accountability, Transparency, and Ethics in AI), a research group that aims to remove discrimination from algorithms, enhance fairness in outcomes and, according to a company spokesperson, answer the question: “As we move toward relying on intelligent agents in our everyday lives, how do we ensure that individuals and communities can trust these systems?”

Meanwhile, on March 26 this year, Google appointed an independent external advisory council to advance the responsible development of its AI projects. “This group will consider some of Google’s most complex challenges that arise under our AI Principles, like facial recognition and fairness in machine learning, providing diverse perspectives to inform our work,” said Kent Walker, SVP of Global Affairs.

However, after Google employees started asking questions about the makeup of this board, Google disbanded the advisory council and has pledged to go back to the drawing board.

Governments are taking notice

But instilling AI ethics and building trust in what the technology can deliver is the responsibility of institutions as well as corporations, and governments are also drawing up their own guidelines for the development and application of AI.

“Trustworthy AI will be our north star, since human beings will only be able to confidently and fully reap the benefits of AI if they can trust the technology,” said Ala-Pietilä, whose expert group will publish an official set of Ethics Guidelines for AI covering the entire European Union in April. “We must ensure to follow the road that maximizes the benefits of AI while minimizing its risks. To ensure that we stay on the right track, a human-centric approach to AI is needed.”

Likewise, as part of its commitment to become a world leader in the development and application of AI, the US government has also pledged to put mechanisms in place that, according to a White House statement issued in February, will: “Foster public trust in AI systems by establishing guidance for AI development and use across different types of technology and industrial sectors.”

These moves will help to address the black box conundrum. With a set of transparent and universally accepted rules regarding how algorithms are developed, it will be easier to have faith in the conclusions such systems arrive at.

“Computers lack the things that make us human. Social ways of thinking, ethics and morals are all abstract concepts, and the self-driving car example is a very good illustration,” says Wilkinson-Brown. “But applying rules and regulations reflective of humanity to how the algorithms are developed is just one part of the AI ethics problem. Bias and prejudice, even if they’re unconscious, are also uniquely human traits. So guidelines have to address this issue, too, if we’re really expected to trust AI-derived outcomes.”

How do you remove human bias?

An AI system’s performance is directly related to data quality. But even if the data is good, if the system has been built to consider the data in a narrow way, it won’t deliver unbiased results.

“Imagine a system for predicting which existing employees should be promoted and choosing new hires from a list of potential candidates,” hypothesizes Wilkinson-Brown. “Now imagine it analyzes the data based purely on managers’ reports and assessments of existing staff. That system will only favor those people who align with managers’ subjective perceptions.”
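To make that concrete, a bias check of this kind can be as simple as comparing how often a model recommends people from different groups. The short Python sketch below is purely illustrative; the group names, records and numbers are invented for the example and do not describe any real Sitel system.

```python
# Illustrative sketch only: the groups and records below are invented.
from collections import defaultdict

# Each record: (group the employee belongs to, did the model recommend promotion?)
predictions = [
    ("aligned_with_managers", True), ("aligned_with_managers", True),
    ("aligned_with_managers", False),
    ("other_staff", False), ("other_staff", False), ("other_staff", True),
]

def selection_rates(records):
    """Share of positive recommendations per group (a simple demographic parity check)."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, recommended in records:
        totals[group] += 1
        positives[group] += int(recommended)
    return {group: positives[group] / totals[group] for group in totals}

rates = selection_rates(predictions)
print(rates)  # roughly {'aligned_with_managers': 0.67, 'other_staff': 0.33}

# A large gap suggests the model is echoing managers' subjective assessments
# rather than measuring merit.
print("selection-rate gap:", max(rates.values()) - min(rates.values()))
```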

This issue is why Facebook has developed Fairness Flow. First announced at its 2018 F8 developer conference, the tool is still in its testing phase but, the company claims, is capable of checking whether a machine learning program is biased in how it processes data.

It’s also why, this March, Amazon teamed up with the National Science Foundation to offer grants of up to $10 million to researchers working on ways of injecting fairness into AI.

“With the increasing use of AI in everyday life, fairness in artificial intelligence is a topic of increasing importance across academia, government, and industry,” said Prem Natarajan, VP of natural understanding in Amazon’s Alexa AI group. “Here at Amazon, the fairness of the machine learning systems we build to support our businesses is critical to establishing and maintaining our customers’ trust.”

Can you trust your data?

But bias can also creep in because of the data itself. A February 2018 study by MIT and Microsoft found that three commercially available facial recognition systems had an error rate of 34.7 percent when it came to identifying women with darker skin, compared to a 0.7 percent error rate for identifying lighter-skinned men. The reason for the huge difference in error rates was the data: it was weighted toward examples of lighter-skinned men and so was giving biased results.
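One straightforward way to surface this kind of skew is to measure a system’s error rate separately for each group it serves. The sketch below is a simplified illustration with made-up samples; it is not the methodology or data of the MIT and Microsoft study.

```python
# Simplified illustration: made-up samples, not data from the study cited above.
def error_rate_by_group(samples):
    """samples: list of (group, true_label, predicted_label) tuples."""
    counts, errors = {}, {}
    for group, truth, predicted in samples:
        counts[group] = counts.get(group, 0) + 1
        errors[group] = errors.get(group, 0) + int(truth != predicted)
    return {group: errors[group] / counts[group] for group in counts}

samples = [
    ("darker_skinned_women", "female", "male"),    # misclassified
    ("darker_skinned_women", "female", "female"),
    ("lighter_skinned_men", "male", "male"),
    ("lighter_skinned_men", "male", "male"),
]

print(error_rate_by_group(samples))
# {'darker_skinned_women': 0.5, 'lighter_skinned_men': 0.0}
# A gap like this is a strong signal that one group is under-represented
# in the training data.
```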

“Think about something as comparatively simple, in the scheme of AI things, as a chatbot,” says Wilkinson-Brown. “Our group has implemented dozens of successful chatbots because of our approach to data. You need a huge data set and you need the rigor to optimize it based on real-world input and context. To avoid skewed results in any AI endeavor, you must take the same data approach.”

To put data volumes and outcomes into perspective, our R&D department recently completed an AI-focused sprint with the goal of building a tool that gives our contact center representatives the right answer to a specific question in a specific conversation with a customer.

“Just for the first prototype, we needed a dataset of 1.4 million interactions to analyze and cluster into a pool of intentions or responses,” explains Wilkinson-Brown. “The suggested answers to customer questions are drawn autonomously from this pool and while it’s a large enough dataset to develop an initial solution, millions more interactions would be needed to improve the system sufficiently so that it could be industrialized.”
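For readers curious what clustering interactions into a pool of intentions can look like in practice, here is a minimal sketch of the general pattern using the open-source scikit-learn library. The tiny dataset, the number of clusters and the canned answers are assumptions made purely for illustration; this is not Sitel’s actual tool or data.

```python
# Minimal, assumption-heavy sketch of intent clustering and answer suggestion.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# A handful of invented customer interactions (a real system needs millions).
interactions = [
    "I forgot my password and cannot log in",
    "How do I reset my account password?",
    "Where is my order? It has not arrived yet",
    "My delivery is late, can you track my parcel?",
]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(interactions)

# Cluster the interactions into a small pool of "intentions" (two, for illustration).
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# One hand-written answer per intention cluster, keyed by the cluster of a
# representative training interaction.
suggested_answers = {
    kmeans.labels_[0]: "You can reset your password from the login page.",
    kmeans.labels_[2]: "Here is the tracking link for your order.",
}

def suggest(question: str) -> str:
    """Map a new question to the nearest intention cluster and suggest its answer."""
    cluster = kmeans.predict(vectorizer.transform([question]))[0]
    # Fall back to a human agent if no answer is defined for this cluster.
    return suggested_answers.get(cluster, "Let me connect you with an agent.")

print(suggest("I need to change my password"))
```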

A business opportunity

Without transparency of operation and an equally transparent and rigorous approach to data, neither employees nor consumers will completely trust AI. And as more organizations begin to realize its potential and as breakthroughs in the field continue to arrive at speed, progress has to be made in terms of AI ethics and responsibility.

“Governments are starting to exert some power in this area but this is an opportunity for businesses to take the lead and to lead by example,” says Wilkinson-Brown. “By involving all stakeholders and by clearly articulating a responsible approach to and application of AI, organizations can build new levels of trust with their customers and empower their workforce to achieve more.”



