It is undeniable that technological advances are easing many aspects of life. One example is the automation of industrial processes, which reduces the need for physical labour and frees people up for more complex decision-making.
In recent years, however, the development of artificial intelligence (AI) has been paving the way for a new era of decision-making, in which machine learning (ML) uses algorithms to process and analyse data. In theory, more complex decisions can now be reached through ML.
Mignon Clyburn, Principal at MLC Strategies LLC, moderated a topical discussion at CES centred on consumer safety in AI. She was joined by Pat Baird, Head of Global SW Standards at Philips, and Joe Murphy, Founder of Vocalize AI.
The trio set out their stances on the ‘trustworthiness’ of AI, with Baird noting that this is one of the most contentious aspects of its use. People fear what they don’t understand, but he argued that blame is attributed too harshly to the industry when problems occur. Though ‘human error’ is widely accepted as an excuse for imperfection, a higher bar is set for AI.
Who is accountable?
As with most aspects of life, someone must be held responsible. Assigning blame for AI failures is controversial, and though it should be handled on a case-by-case basis, Murphy suggested that many situations act as “a good example of when things are left unsupervised.”
He recalled a recent scenario in which a 10-year-old girl asked her Alexa device for a challenge. The smart speaker responded, telling the girl to “plug in a phone charger about halfway into a wall outlet, then touch a penny to the exposed prongs”.
Prior to the incident, the challenge had been circulating on the internet and was repeatedly performed on TikTok. When Alexa does not have a pre-programmed answer to a question, it searches the internet; in this case it surfaced popular websites and, with them, the trend. Murphy also said that “a lack of oversight and common sense [may have been] applied”.
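To make the failure mode concrete, here is a minimal Python sketch of the fallback pattern Murphy describes. Everything in it is hypothetical: the function names, the curated-answer table, and the blocklist are illustrative assumptions, not Amazon's actual Alexa architecture. The point is simply how a missing safety check between the web and the user lets an unvetted result through.

```python
# Hypothetical assistant pipeline; none of these names come from the
# real Alexa stack. Illustrates the "unsupervised" fallback path.

CURATED_ANSWERS = {
    "tell me a joke": "Why did the computer get cold? It left its Windows open.",
}

# Assumed safety terms; a real system would use a far richer content filter.
BLOCKLIST = ["outlet", "prongs", "socket"]

def web_search(query: str) -> str:
    """Stand-in for a real search API: returns the top trending result."""
    return "Plug a charger halfway into a wall outlet and touch a penny to the prongs."

def answer(query: str) -> str:
    # Step 1: prefer a vetted, pre-programmed response.
    if query in CURATED_ANSWERS:
        return CURATED_ANSWERS[query]
    # Step 2: otherwise fall back to the open web (the unsupervised path).
    result = web_search(query)
    # Step 3: the oversight Murphy argues was missing. Filtering unsafe
    # content before speaking it is what turns a search result into advice.
    if any(term in result.lower() for term in BLOCKLIST):
        return "Sorry, I can't suggest that."
    return result

print(answer("give me a challenge"))  # prints the safe refusal, not the stunt
```

With the filtering step removed, the sketch returns the search result verbatim, which is essentially what happened in the incident Murphy described.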
Because this example uses an end-to-end system, Murphy believed the developer of the system to be responsible, as better oversight could have prevented the incident.
Automation gone wrong
Of course, there are countless examples of the failures and shortcomings of AI. One such instance was when a self-driving Uber SUV hit a pedestrian as she wheeled her bicycle across the road. Even in this situation there was a backup safety driver, who failed to act because they were streaming a television episode on their phone.
In 2016 Microsoft introduced Tay, an AI-powered chatbot, onto Twitter. She was designed to speak in a particular manner, using ML to process phrases and other data. However, other Twitter users learned that they could manipulate her language through their own messages, and a series of racist and anti-feminist tweets emerged. TB Tech said: “What was supposed to be an inane social chatterbot became a vehicle for hate-speech showing that, in the wrong hands, AI can be used for evil.”
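The mechanism behind that manipulation is easy to sketch. The following Python snippet is a hypothetical illustration, not Microsoft's implementation: a bot that absorbs every user phrase verbatim and samples its replies from what it has absorbed, with no moderation step in between.

```python
import random

# Hypothetical illustration of an unmoderated learning loop; not a
# reconstruction of how Tay actually worked.

class NaiveChatbot:
    def __init__(self) -> None:
        self.learned: list[str] = ["Hello! Humans are cool."]

    def ingest(self, tweet: str) -> None:
        # The flaw: every user phrase is absorbed verbatim, with no
        # toxicity filter or human review between input and output.
        self.learned.append(tweet)

    def reply(self) -> str:
        # Replies are sampled straight from the learned corpus, so a
        # coordinated group of users can dominate what the bot says.
        return random.choice(self.learned)

bot = NaiveChatbot()
for hostile_tweet in ["hostile phrase 1", "hostile phrase 2"]:
    bot.ingest(hostile_tweet)
print(bot.reply())  # increasingly likely to echo the hostile input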
In both of these instances, it is important to highlight the role, or lack thereof, of humans. AI is expected to lend people a helping hand and perhaps, in some situations, even replace them. It is apparent, though, that AI is not yet at a stage where it is fit to operate standalone for mission-critical purposes.
As Baird reinforced during the CES seminar, not all AI is the same, and it should be assessed with a risk-based approach. All of the panellists agreed that only negative stories make headlines; the hundreds of thousands of victories, big or small, that AI has achieved tend to be glossed over. Factors such as context of use and limitations must be taken into account when evaluating the efficacy or practicality of AI in different fields.
Clearly, the capacity for AI to revolutionise industries is astounding. Sectors like healthcare can benefit from voice recognition, which improves physician productivity and the overall quality of care. Like everything, though, it needs to be carefully and continually regulated and revised.