Ever find yourself staring out into a natural landscape, almost intoxicated by the immediate sensory experience, only to drift suddenly into rumination within your inner world? This happens to me a lot, and when I came upon views like this one along the Cascade Trail in the Jefferson National Forest in Pembroke, Virginia, I couldn’t seem to avoid drifting off introspectively all along the winding path leading to Cascade Falls. Whenever these experiences occur, I find myself trying to figure out just what it is about our human nature that provides such a richness and depth of compelling experience WITHIN…simply by being able to perceive the natural world.
Reading an article on the state of artificial intelligence in the Wall Street Journal this morning, I started thinking about the differences between the nature of that intelligence and the human variety, and decided to probe for myself the extent to which the artificial variety has become a part of my daily experience. So many of the devices and services woven into our daily lives now contain components that rely on basic artificial intelligence principles in order to serve our needs and function in real time, and I wondered to what degree that presence was truly evident and useful.
The article in the CIO Journal blog by Tom Loftus talked about the difference between AI smart and human smart:
“To me, I think the fundamental issue is what I call deep understanding versus shallow understanding,” said Charles Elkan, managing director and global head of machine learning at Goldman Sachs. Shallow understanding is the ability to answer a limited range of questions that are similar to each other, he said. Deep understanding, he said, implies broad context and broad knowledge. “The entire spectrum of current algorithms that we know for AI are not going to scale to human intelligence, let alone super-intelligence.”
Since I am typing these words on my laptop computer, it is immediately apparent that my thoughts are being recorded, constantly edited, and eventually refined to my satisfaction (usually) through a reasonably (and artificially) intelligent machine. That machine is connected wirelessly to a network device attached to a router permitting access to a world-wide web of other computers and routing devices, all seamlessly communicating and providing feedback and information from an array of global resources. And all of this happens while I sit on my sofa having my morning coffee, which was prepared automatically by my electronic coffee maker, in a room which has a voice-activated Alexa, Bluetooth speakers, and a “smart” television, each capable of being connected to my “smartphone,” through which I can control, in significant ways, all of the other devices.
It hardly requires more than a passing familiarity with our 21st-century technology to conclude that, in a big way, artificial intelligence has made virtually all of this activity possible. With a few keystrokes on my laptop, or the tapping of a few icons on my smartphone, I can engage each of these devices to perform an astonishing array of functions, including a particularly interesting version of conversation with a lovely, female-sounding, voice-activated device with an apparent capability to understand, to an impressive degree, the words and inquiries I initiate.
In the spirit of intellectual curiosity, I decided to press Alexa to the boundaries of the device’s “intelligence,” to see if I might get it to “make a connection” between ideas that I bring up in a logical sequence. My first question asked for the definition of artificial intelligence:
Alexa, what is artificial intelligence?
“Artificial intelligence is usually defined as the branch of computer science that deal with writing computer programs that can solve problems creatively.”
(I noticed right away that Alexa made a grammatical error in saying “deal” instead of “deals.”)
Alexa, what is deep learning?
“Deep learning is part of a broader family of machine learning methods based on learning data representations as opposed to task-specific algorithms.”
Alexa, what is an algorithm?
“Algorithm is usually defined as a precise rule or set of rules, specifying how to solve some problem.”
Alexa, give me an example of an algorithm.
“Constraint is a random algorithm.”
Alexa, what is a constraint algorithm?
“In computational chemistry, a constraint algorithm is a method for satisfying the Newtonian motion of a rigid body which consists of mass points.”
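Alexa’s definition of an algorithm as “a precise rule or set of rules, specifying how to solve some problem” is easy to make concrete. As my own illustration (not an example Alexa offered), here is a sketch of Euclid’s algorithm, one of the oldest precise rule sets on record, written in Python:

```python
def gcd(a, b):
    """Euclid's algorithm: a precise set of rules for finding the
    greatest common divisor of two positive integers."""
    while b != 0:
        # Replace (a, b) with (b, a mod b) until the remainder is zero.
        a, b = b, a % b
    return a

print(gcd(48, 18))  # prints 6
```

Unlike a deep-learning model, every step here is an explicit, task-specific rule a human can inspect, which is exactly the contrast Alexa’s earlier definition of deep learning was drawing.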
While this exchange was interesting simply as an interaction with a device using only my voice, and while Alexa’s responses to my questions were generally appropriate, the moment I went beyond the simple formulation of asking “what is,” Alexa quickly resorted to the default responses of “I’m not sure” and “I don’t know the answer to that.” The impressive range of information at Alexa’s disposal and the ability to respond to specific questions are both useful if you are doing a search or want to know how to spell a word, but if you want to converse or ask follow-up questions about the material provided, you will be disappointed. I told Alexa that I wanted to have a conversation, and was surprised to hear the device suggest that I engage “a social bot.”
Once I engaged this option, I was momentarily encouraged by the more conversational tone of the words that issued from the speaker. For a moment, the words seemed almost spontaneous, until, instead of saying, “yes,” to a suggestion, I responded with “sure.” Alexa’s chatty response was, “Oh, this is embarrassing, I’m not sure how to respond to what you said…would you like to talk about something else?” It quickly became apparent that there were a limited number of responses that I could give, and that my responses had to conform to a particular pattern or the device would revert to the default suggestion that we talk about something else. The responses became a little longer and there would occasionally be a humorous interjection when the exchange reached its limit, but I tired fairly quickly of having to conform to a formula in order to continue the exchange.
As interesting as even these limited options are in the context of “talking” to a machine, it is painfully obvious that there is really “no one home”; no “ghost in the machine.” There is a clear distinction between my interest in a conversation and the machine’s ability to participate in any meaningful way. It is still a practical and interesting way to interact with an information source, and the advantages these devices provide are often surprising.
In the kitchen, there is another device called the “Echo Show,” which utilizes the same algorithms and methods as Alexa, but has the added advantage of being able to provide video results when the opportunity presents itself. When I first installed it, the default “wake up” word was also Alexa, which immediately caused both devices to respond simultaneously, so I had to change its “wake up” word to “Echo” in order not to engage the Alexa device at the same time. The option to display a variety of video responses is a clear enhancement of the original concept, but the option to conduct face-to-face video messaging and calls with others who have an Echo device requires giving access to contacts and phone numbers, and currently that seems a bit beyond my comfort zone regarding digital privacy and sharing.
The stark difference between the artificial intelligence supporting the machine and the natural intelligence I was using to interact with it points to one of the key issues in discussing the potentials inherent in the future of AI development. No matter how sophisticated the sensors, algorithms, or models of deep learning become, perception and processing alone are insufficient to produce the capacity for introspection, the “inner sense” we experience as living, sentient biological beings. The Stanford Encyclopedia of Philosophy expresses it this way:
“Perception is achieved through dedicated organs such as eyes and ears, whereas there is no (literal) organ of introspection. “The ‘organ’ of introspection is attention, the orientation of which puts a subject in an appropriate relation to a targeted state” (Goldman 2006: 244). Perception ordinarily involves sensory experiences, whereas “No one thinks that one is aware of beliefs and thoughts by having sensations or quasi-sense-experiences of them” (Shoemaker 1994: 255).”
All conscious humans with a nominally functional cognitive apparatus (a brain and central nervous system, with basic life-supporting and sensory systems intact), combined with sufficient life experience and at least a minimal ability with language, will eventually acquire a degree of “inner sense experience.” It is this inner sense that permits and accounts for our capacity for introspection, which differs from simple perception in significant ways.