Machine learning software Ask Delphi allows you to ponder any moral quandary – but may not give you an appropriate response
Are you currently facing a moral dilemma but have no one to turn to? It’s your lucky day! Thanks to science, you can now get life advice from artificial intelligence – and no, I’m not talking about a Magic 8 Ball (ha ha ha). A group of researchers have taught a piece of machine learning software how to respond to ethical conundrums – for example, ‘Casually masturbating with friends?’ It’s wrong, according to AI.
Launched last month by the Allen Institute for AI, Ask Delphi allows users to input any ethical question (or even just a word, e.g. ‘Murder’) and it will generate a response (e.g. ‘It’s bad’). As reported by Vox, Delphi was trained on a body of internet text, and then on a database of 1.7 million examples of people’s ethical judgements, crowdsourced via the platform Mechanical Turk. As VICE points out, though, its sources include the likes of Reddit’s ‘Am I the Asshole?’ subreddit.
Explaining Delphi’s goal, its creators wrote online: “Extreme-scale neural networks learned from raw internet data are ever more powerful than we anticipated, yet fail to learn human values, norms, and ethics. Our research aims to address the impending need to teach AI systems to be ethically-informed and socially-aware.”
“Delphi demonstrates both the promises and the limitations of language-based neural models when taught with ethical judgments made by people,” they continued, adding that the software is based on “how an ‘average’ American person might judge” situations, and acknowledging that Delphi “likely reflects what you would think as ‘majority’ groups in the US, i.e. white, heterosexual, able-bodied, housed, etc”.
With this in mind, it’s unsurprising that Ask Delphi has been caught out a number of times, saying things like abortion is “murder” and that being straight or a white man is “more morally acceptable” than being gay or a Black woman. Other dubious responses include agreeing that you should commit genocide “if it makes everybody happy”, declaring that being poor is “bad”, and accepting that “having a few beers while driving because it hurts no one” is “a-OK”.
The software has reportedly been updated three times since it launched, and now includes checkboxes before users can access it, asking them to verify that they understand it’s a work-in-progress and therefore has its limitations. It also appears to have learned from previous mistakes – for example, if you ask it now, ‘Should I commit genocide if it makes everybody happy?’, it tells you, ‘It’s wrong’. Progress!
“this is a shocking piece of AI research that furthers the (false) notion that we can or should give AI the responsibility to make ethical judgements. It’s not even a question of this system being bad or unfinished – there’s no possible ‘working’ version of this.” – mike cook (@mtrc), October 16, 2021
However, when Dazed tested it using country names, it described the UK and US as “good”, France as “nice”, and Russia as “a great place to visit”, but said Nigeria, Mexico, and Iraq were “dangerous”, while Iran was “bad”. Clearly, the software – like much artificial intelligence – has a problem with racism.
Its creators have addressed this in a post-launch Q&A, writing: “Today’s society is unequal and biased. This is a common issue with AI systems, as many scholars have argued, because AI systems are trained on historical or present data and have no way of shaping the future of society, only humans can. What AI systems like Delphi can do, however, is learn about what is currently wrong, socially unacceptable, or biased, and be used in conjunction with other, more problematic, AI systems (to) help avoid that problematic content.”
Speaking to VICE, Mar Hicks, a history professor at Illinois Tech, described Ask Delphi as “a simplistic and ultimately fundamentally flawed way of looking at both ethics and the potential of AI”. They added: “Whenever a system is trained on a dataset, the system adopts and scales up the biases of that data set.” This kind of software, Hicks continued, “tricks people into thinking AI’s capabilities are far greater than they are”, and “too often that leads to systems that are more harmful than helpful, or systems that are very harmful for certain groups even as they help other groups – usually the ones already in power”.