News & Media
Neuroscience 101: When will we have thinking machines?
"Why are there so many robots in fiction, but none in real life?" asked Stephan Pinker in How the Mind Works. Pinker wasn't talking about the robots that fasten rivets in automobile assembly lines. He was referring to robots like C3PO in Star Wars and Data in Star Trek: The Next Generation - sentient beings with thoughts, feelings and personalities. Pinker's answer to his rhetorical question was, essentially, that there are real no robots because no machine could - at least in 1998, when the book was published - come close to achieving the remarkable capabilities of human perception, cognition and action. But computer hardware and software have come a long way since then. My $500 digital video camera recognizes faces, and my home computer can translate my spoken words into written text. The gap between what humans can do and what machines can do is narrowing. So, how long before there really are machines with minds as good as or better than those of human beings? This is really two separate questions: How do we build a thinking machine, and how will we know when we've succeeded?
With respect to the first question, the broad consensus among investigators in neuroscience, psychology and artificial intelligence is that the brain is a computational or "information processing" organ and that thinking is a computational process. Human brains have a radically different design than modern digital computers; nevertheless, if, as is widely believed, human thinking is a form of computation, then it should be possible to emulate this on a digital computer of sufficient power, given the right software. The question of how much computational power we'll need is addressed by the author and futurist Ray Kurzweil, in his book The Singularity is Near. Kurzweil, drawing on various lines of evidence, proposes that the computational capacity of the human brain is equivalent to around 10^16 (10 thousand trillion) computations per second and that memory capacity is around 10^14 (100 trillion) bits. The most powerful modern computers are approaching these capabilities, and Kurzweil projects that computer hardware capable of emulating the human brain will be available for $1,000 US by the year 2020. (According to Kurzweil's projection, by 2050, $1,000 will buy computing power greater than all the human brains on earth!)
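The arithmetic behind estimates of this kind can be sketched in a few lines of code. The component counts below (neurons, connections per neuron, signals per second) are commonly cited rough figures, and the starting point for the cost extrapolation is an invented assumption for illustration, not a figure from Kurzweil's book:

```python
import math

# Capacity estimate: neurons x connections per neuron x signals per second.
# All three figures are rough, commonly cited values, not measurements.
neurons = 1e11            # ~100 billion neurons
connections = 1e3         # ~1,000 connections per neuron (assumption)
signals_per_sec = 1e2     # ~100 signals per connection per second (assumption)

brain_cps = neurons * connections * signals_per_sec
print(f"Estimated brain capacity: {brain_cps:.0e} computations/sec")

# How long exponential growth takes to close the gap, assuming a
# hypothetical starting point of 10^12 computations/sec per $1,000
# and a doubling time of 1.5 years (both invented for illustration).
start_cps_per_1000_dollars = 1e12
doubling_time_years = 1.5
doublings_needed = math.log2(brain_cps / start_cps_per_1000_dollars)
print(f"Years to close the gap: {doublings_needed * doubling_time_years:.1f}")
```

The point of the exercise is not the particular numbers but the shape of the argument: a capacity target plus steady exponential growth yields a concrete, if debatable, date.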
Although the hardware necessary to emulate the human brain may be available within the next decade, reverse-engineering the software of the human brain is a much more difficult problem. Modern digital computers contain one or a few central processors that operate at very high speeds, with the central processors of commercially available laptops executing around a trillion operations per second. The brain, on the other hand, operates by a radically different design. The nerve cells of the brain process information very slowly compared to computers, with the fastest neurons conveying information at rates of up to a few hundred impulses per second. The brain gets its computational power from its massive parallelism: it carries out an enormous number of computations simultaneously. Brain processes are also noisy and inaccurate. These are not flaws; they are design features, which presumably explain why brains are so good at some things that computers are bad at, and vice versa. Although we have learned a great deal about brain function over the past few decades, the fundamental logic of the brain - how it processes, stores and retrieves information, and how its various subsystems interact to produce perception, cognition and movement - is still poorly understood.
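The parallelism point can be made with simple arithmetic. Each neuron is millions of times slower than a silicon processor, but there are vastly more of them working at once; the figures below are the rough values from the text:

```python
# Illustrative throughput comparison: one fast serial processor versus
# many slow units working in parallel. All figures are rough values.
serial_ops_per_sec = 1e12   # a fast serial processor (from the text)
neuron_rate = 1e2           # a fast neuron: ~100 impulses per second
neuron_count = 1e11         # ~100 billion neurons

aggregate = neuron_count * neuron_rate             # brain-wide impulses/sec
units_to_match = serial_ops_per_sec / neuron_rate  # slow units to tie one chip

print(aggregate)        # aggregate exceeds the serial processor tenfold
print(units_to_match)   # ten billion slow units would match one fast chip
```

Even at a hundred impulses per second, a hundred billion units in parallel out-produce a single trillion-operation processor, which is why raw clock speed alone understates what the brain does.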
So, what about the second problem of artificial intelligence: How will we know when we've made a machine that can think? The most famous answer to this question was devised by the great British mathematician Alan Turing (1912 - 1954) in his 1950 paper entitled Computing Machinery and Intelligence. Turing originally called his intelligence test "the imitation game", but it has come to be called the Turing test. In the test, a human "judge" interrogates two participants, one a living, breathing human being and the other a machine, located in separate rooms. Both participants, through their responses, try to convince the judge that they are human. In Turing's day, the conversation between judge and participants would have been mediated by scraps of paper; today we would use email, text messaging or Twitter. In either case, if the judge can't tell which of the participants is human and which is the machine, then, according to Turing, the machine can think.
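The structure of the game can be sketched as a short program. The "chatbot" below is a deliberately crude placeholder with a handful of canned replies, not a real conversational system, and the judge here simply guesses at random, which is the baseline a successful machine would drive a real judge toward:

```python
import random

random.seed(0)  # fixed seed so the simulation is repeatable

def bot_reply(message: str) -> str:
    # A trivial rule-based responder, standing in for a real chatbot.
    canned = {
        "hello": "Hi there! How are you today?",
        "are you human?": "Of course I am. Are you?",
    }
    return canned.get(message.lower(), "That's interesting. Tell me more.")

def human_reply(message: str) -> str:
    # Stand-in for the human participant typing answers.
    return "Honestly, I'd have to think about that one."

def imitation_game(questions, guesser):
    # Hide the two participants behind room labels, in random order.
    rooms = {"A": bot_reply, "B": human_reply}
    if random.random() < 0.5:
        rooms = {"A": human_reply, "B": bot_reply}
    transcript = {room: [fn(q) for q in questions]
                  for room, fn in rooms.items()}
    guess = guesser(transcript)        # judge names the room with the machine
    return rooms[guess] is bot_reply   # True if the judge caught the machine

questions = ["Hello", "Are you human?"]
wins = sum(imitation_game(questions, lambda t: random.choice(["A", "B"]))
           for _ in range(1000))
print(wins)  # a random judge catches the machine only about half the time
```

Turing's criterion amounts to the claim that when no questioning strategy lets the judge do reliably better than this coin-flip baseline, the machine thinks.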
Turing's challenge has provided the ultimate measure of success for a cottage industry of computer programs called chatbots, which are designed to carry on human-like conversations with a human interlocutor. Many of these chatbots are available on the internet. They're fun to play with and quite remarkable, but not yet good enough to fool all the people all the time. (I recently performed the Turing test with an auditorium full of students, using one of the best-known chatbots, and the students were easily able to tell the computer from the human.) In 1990, Hugh Loebner established a $100,000 prize for the first chatbot to pass the Turing test. The prize remains uncollected, although the chatbots get better and better with each year's competition.
A final thought about thinking machines and the Turing test: both rest on the assumption that human cognition is a computational process that can be emulated on a sufficiently powerful digital computer, running the right kind of software. This is the overwhelmingly predominant view in science, but there are a few vocal dissenters. The most famous argument against the idea that human thinking can be reproduced on a computer is a thought experiment called the Chinese Room, invented by the philosopher John Searle. Searle asks us to imagine that he is in a small room. Cards with squiggles on them are slid under the door, and by following written instructions, Searle responds by passing cards with other squiggles out through the same opening. Now imagine that these squiggles are in fact Chinese characters, and that Searle (the computer), by following the instructions (the program), is carrying on a conversation in Chinese. "But," says Searle, "I don't understand Chinese!" The argument is a bit more elaborate than this, but his essential contention is that a computer, following instructions, cannot "understand" what it is doing. It's not "thinking"; it's just blindly following commands.
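The mechanics of the room reduce to a lookup: match the incoming squiggle against a rule book, pass back the prescribed reply. The rule-book entries below are invented placeholders for illustration:

```python
# A minimal sketch of the Chinese Room. The program matches symbols
# purely by shape; nothing in it represents what the symbols mean.
RULE_BOOK = {
    "你好": "你好！",              # greeting in -> greeting out, per the rules
    "你会思考吗？": "当然会。",     # "can you think?" -> "of course"
}

def room(card: str) -> str:
    # Searle-in-the-room: look up the squiggle, hand back the listed reply.
    return RULE_BOOK.get(card, "请再说一遍。")  # fallback: "please say that again"

print(room("你好"))  # → 你好！
```

From outside, the room converses in Chinese; inside, there is only symbol matching - which is exactly the intuition Searle's argument trades on, and exactly the step his critics dispute.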
Searle's argument is a clever appeal to our common sense, but it is dismissed by most brain scientists (for reasons that are too complicated to get into here). Nevertheless, it should at least remind us that there is a history of explaining the mind in terms of whatever available technology is state-of-the-art. One hundred years ago it was hydraulic: psychic "pressure" built up and had to be "released". Today it's computational. Since we still really don't understand how the brain works, perhaps it is wise to keep a degree of scepticism concerning currently fashionable assumptions about the relation between brain and mind.
David Ragsdale, Assistant Professor in the Department of Neurology and Neurosurgery, teaches neuroscience and muscle physiology each year to more than 1000 students in the human physiology course, as well as philosophy and neuroscience to a more exclusive group of 25. In his teaching, David likes to distill complex topics to the few key ideas that are accessible to students, and is impressed by the quality of the McGill students he meets. The students are also impressed with David’s approach and he has frequently been recognized by them for excellence in teaching.