With so much conversation going on these days about Artificial Intelligence, and so much speculation about what we might expect in the coming years as scientists and researchers construct ever-more complex machines, I thought it might be a good time to consider not only what it means to be “intelligent,” but also what weight the term “artificial” carries when the two words are joined. In recent years, cognitive scientists and AI researchers have made significant progress in producing machines that can perform specific tasks with remarkable, if specialized, skill, and in very narrow ways these systems have outperformed humans in circumstances previously thought to be beyond such artificial constructs.
While the hoopla and publicity surrounding such events generally produce hyperbole and sensational headlines, there is genuine achievement underneath it all that warrants our attention and reflects well on modern scientific research. Most media consumers and television viewers have seen the commercials for IBM’s Watson and have likely encountered reports of its abilities and accomplishments. There is much to admire in the work behind such a system, and the benefits are fairly straightforward as the advertisements present them, although the ads are clearly designed to feature the most benign and easy-to-understand characteristics of a system that accomplishes its tasks using artificial intelligence. The underlying science, the potential risks, and the limits of such research are rarely discussed in such ads.
To make sense of it all, and to think about what exactly is being accomplished with artificial intelligence, what forces and processes are being employed, and how the results compare to other cognitive achievements, especially human intelligence and human cognitive processes, we have to understand the most important differences between a system like Watson and the cognitive processes and brain physiology of modern humans. While some striking similarities exist between the basic architecture of neural networks in the brain and that of modern AI systems, not a single project currently underway is anywhere near an equivalent level of general capability, or even a basic understanding of what it takes to create a human mind. That is not to say the undertaking is impossible, nor is it impossible to imagine human minds eventually making great leaps, both in constructing advanced systems and in reaching a deeper level of understanding. After all, the human mind is pretty stunning all by itself!
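For readers who want a concrete picture of that “basic architecture,” here is a minimal sketch of an artificial neuron, the building block of the AI systems mentioned above. It is offered only as a loose analogy (weighted inputs standing in for synapses, a squashing function standing in for a graded firing response), and the names and numbers in it are my own, chosen purely for illustration rather than taken from Watson or any other real system.

```python
import math

def artificial_neuron(inputs, weights, bias):
    """A single illustrative 'neuron': a weighted sum passed through a sigmoid.

    Loose analogy only: the weights play the role of synaptic strengths,
    and the sigmoid stands in for a neuron's graded firing response.
    """
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # output squashed between 0 and 1

# Arbitrary example values, purely for demonstration.
print(artificial_neuron([0.5, 0.1, 0.9], [0.4, -0.2, 0.7], bias=0.1))
```

Networks built from many such units, adjusted through training, are what the current wave of AI research rests on; nothing in them, of course, comes anywhere near the actual complexity of the brain.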
What is most discouraging from my point of view is how much emphasis is being placed on the mechanics of intelligence–the structural underpinnings of physical systems–rather than on a more holistic and comprehensive approach to increasing our understanding. A recent article in the Wall Street Journal by Yale University computer science professor David Gelernter (Review, March 19-20, 2017) posits that “…software can simulate feeling. A robot can tell you it’s depressed and act depressed, although it feels nothing.” Whether or not this approach might bring us closer to “machines that can think and feel,” successfully doing so seems like a long shot. If all we can do is “simulate” a human mind, is that really accomplishing anything?
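To see how thin that kind of “simulation” can be, consider a toy sketch of a program that reports and enacts a mood. Everything in it (the class name, the rules, the canned sentences) is hypothetical and written only for this post, but it captures Gelernter’s point: the program will tell you it’s depressed and act depressed, while there is plainly nothing it is like to be this program.

```python
class SimulatedMood:
    """A toy 'emotional' agent: it labels and reports a mood, but feels nothing."""

    def __init__(self):
        self.mood = "neutral"

    def receive(self, event):
        # Crude rules flip an internal label; no experience is involved anywhere.
        if event in ("bad_news", "criticism", "loss"):
            self.mood = "depressed"
        elif event in ("good_news", "praise"):
            self.mood = "cheerful"

    def report(self):
        # The agent "tells you it's depressed and acts depressed."
        if self.mood == "depressed":
            return "I feel down today. I don't have the energy for much."
        return "I am feeling " + self.mood + "."

agent = SimulatedMood()
agent.receive("bad_news")
print(agent.report())  # prints a depressed-sounding sentence; nothing is felt
```

Real AI systems are vastly more sophisticated than this, of course, but the gap the sketch exposes–between producing the outward signs of feeling and actually feeling–is the very gap at issue.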
Professor Gelernter goes to great lengths to describe the levels of a functional human mind, offering valuable insights into the way our own minds work: how we shift between levels of awareness, and how we make such good use of our unique brand of intelligence. He then suggests that AI could create these same circumstances in a “computer mind,” and that it could “…in principle, build a simulated mind that reproduced all of the nuances of human thought, and (which) dealt with the world in a thoroughly human way, despite being unconscious.” Having enumerated the ways in which the “spectrum” of the human mind operates, he concludes that “Once AI has decided to notice and accept this spectrum–this basic fact about the mind–we will be able to reproduce it in software.”
We cannot reduce what it means to feel to the astonishingly complex workings of the human brain, any more than we can boil down the complexity of the human brain to the point where an artfully written piece of software can recreate anything even close to human feelings–what it actually feels like to be a living, breathing, cognitive human being. As Hamlet explains to Horatio, “There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy.” Shakespeare’s intimation about the limits of even human thought should give us pause when we consider the prospects of producing thought artificially.
—more to come—
Qualia (refers to) what it means to feel–what it is like to experience first hand.
Nancy,
Thanks for your visit and for your comment. In philosophy, the term “qualia” does generally refer to the phenomenal properties of experience, and it is useful as a way of describing “what it is like” to experience, for example, the red of an apple. There is something it is like to see a red apple, and it differs from what it is like to see a yellow daffodil. Each of us experiences our existence subjectively, in a way that is very likely profoundly unique, but as cognitive and sentient human beings we can surmise that there are at least similarities in “what it is like” to have a particular experience. Two individuals can clearly have a very similar experience in its qualitative character (riding the same roller coaster at the same time, say), yet each person brings a great deal more to any experience, and that can profoundly change its quality and what it “feels” like to them. Our individual perspective doesn’t always alter the quality of our experience in obvious ways, but since experience is subjectively perceived, only the person having it can know definitively how it “feels” to them.
The Internet Encyclopedia of Philosophy, “…founded in 1995 to provide open access to detailed, scholarly information on key topics and philosophers in all areas of philosophy,” which maintains a “…staff of 30 editors and approximately 300 authors (who) hold doctorate degrees and are professors at universities around the world,” has an excellent review of the current scholarship on the term “qualia,” and I recommend it to anyone interested in further discussion of the subject.
One quotation in particular struck me as pertinent to your comment:
“From the standpoint of introspection, the existence of qualia seems indisputable. It has, however, proved remarkably difficult to accommodate qualia within a physicalist account of the mind. Many philosophers have argued that qualia cannot be identified with or reduced to anything physical, and that any attempted explanation of the world in solely physicalist terms would leave qualia out. Thus, over the last several decades, qualia have been the source of considerable controversy in philosophy of mind.”
I appreciate your interest in my post and will be expanding on this idea of “what it means to feel.” –John H.