Below are ‘talking points’ based on an article in AH No. 121, ‘AI on the Go: Notes on the current development and use of Artificial Intelligence’, by Carl Mahoney. Carl is a Humanist Society of Victoria member, and was professor and Dean of the Faculty of Architecture and Building, University of Technology, Papua New Guinea.
As a software engineer, I have an interest in Artificial Intelligence (AI). A number of statements made in the article are controversial. At a discussion group organised by the ACT Humanist Society, I led a session that explored some of these statements and looked deeper into the implications of AI from a Humanist perspective. Here I present some of the questions and issues we discussed, as a starting point for your own group or personal exploration of the topic.
What is AI?
In his article Mahoney does not provide a definition of AI. He mentions a great many technological innovations. Some of these are examples of AI, some are not. Here’s a definition of Artificial Intelligence from the Oxford Dictionary:
‘the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.’
Some of the innovations mentioned in Mahoney’s article include:
- ancient Greek and Chinese automata
- big data
- autonomous vehicles
- keystroke monitors
- language translation
- composing music
- visual arts
- remote-controlled devices
- the Internet of Things
- industrial robots
- robotic vacuum cleaners
- ‘brain peripherals such as those for sight and sound’ (I’m not sure what Mahoney meant by this. He gives no examples. Perhaps he meant cochlear and retinal implants?)
Question 1 (for group discussion). Choose one or two of these examples from the article. Do you think they are AI? Why, or why not?
An interesting case is that of autonomous vehicles and robots, which can be classified into three types:
1. Those that move in a set way (fixed or pre-programmed), like the ancient Greek and Chinese automata. The mining vehicle Mahoney cites, which recently collided with a human-driven vehicle in Western Australia, is a more sophisticated example of this type.
2. Those that are remote-controlled, like military and hobby drones, deep-sea submersibles, and bomb disposal robots.
3. Those that use sophisticated programming to achieve a goal while sensing and processing inputs from their environment. Google’s autonomous vehicles fall into this category.
Mahoney does not distinguish between these in his article. Only those in the third category are genuinely AI.
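For readers who code, the gap between the first and third types can be made concrete. The sketch below is purely my own illustration, not drawn from any real vehicle’s software: a pre-programmed automaton replays a fixed sequence, while a goal-seeking agent chooses each move by sensing its surroundings and comparing them with a goal.

```python
# Toy illustration (my own, hypothetical) of pre-programmed vs. goal-seeking machines.

def preprogrammed_automaton():
    """Type 1: repeats a fixed sequence regardless of surroundings."""
    return ["forward", "forward", "turn_left", "forward"]

def goal_seeking_agent(position, goal, obstacles):
    """Type 3: a sense/decide/act loop on a simple grid.

    At each step the agent senses which neighbouring squares are free,
    decides which one brings it closest to the goal, and acts on that choice.
    """
    path = []
    while position != goal:
        x, y = position
        gx, gy = goal
        # Sense: candidate moves, excluding blocked squares.
        candidates = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
        free = [c for c in candidates if c not in obstacles]
        # Decide: pick the free square closest to the goal (Manhattan distance).
        position = min(free, key=lambda c: abs(c[0] - gx) + abs(c[1] - gy))
        # Act: record the move.
        path.append(position)
    return path

print(goal_seeking_agent((0, 0), (2, 1), obstacles={(1, 0)}))
# → [(0, 1), (1, 1), (2, 1)]
```

The agent here uses a naive greedy rule that can get stuck in more complex environments; real autonomous vehicles plan far more carefully, but the sense/decide/act loop is the same in outline, and it is what separates the third category from the first.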
Other examples of AI
There have been many developments in Artificial Intelligence that were not mentioned in the article. Some have made headline news over the last few years.
Question 2. Give one example of AI that you know of. What does that development mean to you?
Here are some I find interesting:
- Aipoly – phone app for the blind. Machine learning, visual processing
- Apple’s Siri – phone app. Speech recognition, voice control, information retrieval
- Google’s Cloud Speech – speech recognition. Used to power voice control and information retrieval for phones and tablets, amongst other things
- Microsoft’s Tay – a chatbot created for 18- to 24-year-olds in the U.S. for entertainment purposes, released on Twitter in March 2016. Machine learning. Microsoft was forced to pull the plug on Tay and delete offending tweets after Twitter users taught her racist hate speech http://bit.ly/1VPvSoq
- Google DeepMind’s AlphaGo – deep neural networks combined with reinforcement learning. Won a five-game challenge match against Lee Sedol 4–1 in March 2016
- IBM’s Watson – natural language processing, machine learning, information retrieval. Won the first-place $1m prize on Jeopardy! in 2011, playing against two former winners. Watson was not connected to the internet during the game
- IBM’s Deep Blue – brute computational force chess player. Beat world chess champion Garry Kasparov in 1997
- Honda’s ASIMO – humanoid robot designed as a companion. Understands and can act on requests (not for sale)
- Aldebaran’s NAO, Romeo and Pepper. Pepper is designed as a companion, can recognise emotions, hear and speak, and can be adapted via downloadable apps. It is currently available for sale in Japan only, for around $1600 USD with a service plan of $1200 USD per month!
- Alpha 2 – small humanoid robot designed as household companion. Due to be released in March 2017 for $1990 USD (http://bit.ly/2cvHPBh)
- Hiroshi Ishiguro’s humanoid robots – actually not AI (his Geminoid is remotely operated, for example) but worth a mention because of their very lifelike appearance
- In Japan the hotel Henn-Na (‘Weird Hotel’) is staffed almost entirely by robots http://bit.ly/29cXsMw
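The Tay episode above is worth dwelling on, because it illustrates a general hazard of any system that learns directly from user input. The toy bot below is entirely my own invention and has nothing to do with Microsoft’s actual code, but it shows the core problem: without filtering, every user utterance becomes part of what the machine may say later.

```python
# Hypothetical toy chatbot (my own sketch, not Microsoft's code) showing why
# learning unfiltered from users, as Tay did, leaves a system open to poisoning.
import random

class NaiveEchoBot:
    """Learns replies by memorising whatever users say to it."""

    def __init__(self):
        self.learned = []

    def listen(self, message):
        # No filtering: every user message becomes a candidate future reply.
        self.learned.append(message)

    def reply(self):
        return random.choice(self.learned) if self.learned else "Hello!"

bot = NaiveEchoBot()
bot.listen("AI is fascinating")
# A hostile user only needs to talk to the bot to shape what it says later:
bot.listen("some abusive phrase")
print(bot.reply())  # may be either message -- the bot cannot tell them apart
```

Tay’s learning was far more sophisticated than this, but the vulnerability is the same in kind: a model trained on whatever the public feeds it will faithfully reproduce the worst of it.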
Throughout his article, Carl Mahoney makes statements about ‘the computer’, for example:
‘AI is a child of the computer’, ‘The computer is now a fairly mature device’, ‘the computer is inherently regimented and logical’, ‘the computer has an inbuilt tendency towards what we call in human terms “fascism”’, ‘All matters normally given over to human discretion are difficult for the computer to handle’, ‘the difficulty of using the computer to manage our affairs’.
Question 3. Given the huge variety of computational devices that have been created, does ‘the computer’ have any meaning?
In my opinion, talking about ‘the computer’ in a discussion on AI is akin to talking about ‘the animal’ in a discussion on intellect. Saying ‘the computer is inherently regimented and logical’ and ‘All matters normally given over to human discretion are difficult for the computer to handle’ is like saying ‘the animal is inherently vague and unpredictable, and will never be good at logical reasoning’.
Machines you can relate to
Even in the absence of AI, sometimes a machine seems to take on a personality you can relate to. Mahoney gives the example of robotic pets for comforting the elderly. There are two machines in our household that have inspired such a feeling of anthropomorphism in me that they have names: ‘Rosie’, the robotic vacuum cleaner, and ‘Supernanny’, the car. Both have voices: Rosie communicates with simple phrases like ‘Please select mode’ and ‘Check side brush’, while Supernanny provides navigational guidance in a crisp British accent.
Question 4. Are there any machines in your household that you have named? Do you think they are an example of AI?
Artificial General Intelligence
In his article, Mahoney draws a distinction between specialised AI, in which a machine can perform one function that was previously thought to be possible only for humans, and Artificial General Intelligence (AGI), in which a machine can perform any intellectual task a human can.
How will we know when we have achieved this? As early as 1950, Alan Turing described a test for machine intelligence, now known as the Turing Test: a human judge holds a five-minute text conversation, and the machine passes if the judge is persuaded it is human. Turing predicted that machines would eventually fool the average judge at least 30% of the time. In 2014 a program called Eugene Goostman, which simulates a 13-year-old Ukrainian boy, was able to fool 33% of the judges after 5 minutes of conversation at an event organised by the University of Reading. Hugh Loebner, an American inventor, has offered a controversial prize for the first program to pass his more rigorous version of the test. That prize is yet to be claimed (http://bit.ly/2dj4Fci).
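Turing’s criterion is easy to express as arithmetic. In his 1950 paper he predicted that an average interrogator would have no more than a 70% chance of making the correct identification after five minutes of questioning, i.e. the machine would fool the judges at least 30% of the time. A minimal sketch of that arithmetic, using the Eugene Goostman figure commonly reported in the press (about a third of 30 judges):

```python
def fools_enough_judges(judged_human, total_judges, threshold=0.30):
    """True if the machine fooled at least `threshold` of the judges.

    The 30% default comes from Turing's 1950 prediction that an average
    interrogator would have no more than a 70% chance of identifying
    the machine correctly after five minutes of questioning.
    """
    return judged_human / total_judges >= threshold

# Eugene Goostman, 2014: reported to have fooled about a third of the judges.
print(fools_enough_judges(10, 30))  # → True: 33% clears the 30% bar
```

Whether clearing that bar with a persona designed to excuse broken English really counts as passing the test is, of course, exactly what made the 2014 claim controversial.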
Mahoney warns about some of the dangers of AGI. In this he keeps very good company – Stephen Hawking, Elon Musk and Bill Gates have also been outspoken about the risks. But Mahoney leaves unexplored a question of interest to Humanists:
Question 5. Is it ethical, from a Humanist perspective, to strive to develop AGI?
‘It has been reported that because ASIMO’s walk is so eerily human-like, Honda engineers felt compelled to visit the Vatican just to make sure it was okay to build a machine that was so much like a human. (The Vatican thought it was okay.)’ – http://bit.ly/2dqzVXb
And more on ethics and AI: Navy researchers are working on developing a robot with a sense of morality (http://bit.ly/2ctEAFH).
The dangers of AI in fiction:
- Terminator, 1984–2015 film series, James Cameron and Gale Anne Hurd.
- Battlestar Galactica. Several TV Series and films, 1978–2013, Creator Glen A. Larson.
- The Matrix, 1999–2003 film series, dir. The Wachowski Brothers.
- Blade Runner, 1982, dir. Ridley Scott, based on the novel by Philip K. Dick.
- 2001: A Space Odyssey, 1968 film, dir. Stanley Kubrick, based on the novel by Arthur C. Clarke.
Another question of interest to Humanists that Mahoney has left unasked is this: what if we did develop a machine indistinguishable from a human?
Question 6. How should we, as Humanists, treat such a machine?
Fiction that explores the rights of machines that can think:
- Fallout 4. 2015 immersive computer game, Bethesda Game Studios.
- Humans. 2015 TV Series, AMC, Channel 4 and Kudos co-production.
- Ex Machina. 2015 film, dir. Alex Garland.
- Her. 2013 film, dir. Spike Jonze.
- A.I. Artificial Intelligence. 2001 film, dir. Steven Spielberg.
- Bicentennial Man. 1999 film, dir. Chris Columbus.