Perception and Introspection

Ever find yourself staring out into a natural landscape, almost intoxicated by the immediate sensory experience, and suddenly find yourself ruminating deep within your inner world? This happens to me a lot, and when I came upon views like this one along the Cascades Trail in the Jefferson National Forest in Pembroke, Virginia, I couldn’t seem to avoid drifting off introspectively all along the winding path leading to the Cascade Falls. Whenever these experiences occur, I find myself trying to figure out just what it is about our human nature that provides me with such a richness and depth of compelling experience WITHIN…simply by being able to perceive the natural world.

Reading an article on the state of artificial intelligence this morning in the Wall Street Journal, I started thinking about the differences between the nature of that intelligence and the human variety, and decided to probe for myself the extent to which the artificial variety has become a part of my daily experience. Since so many of the devices and services involved in our daily lives these days contain components that rely on basic artificial intelligence principles in order to serve our needs and function in real time, I wondered to what degree that presence was truly evident and useful.

The article in the CIO Journal blog by Tom Loftus talked about the difference between AI smart and human smart:

“To me, I think the fundamental issue is what I call deep understanding versus shallow understanding,” said Charles Elkan, managing director and global head of machine learning at Goldman Sachs. Shallow understanding is the ability to answer a limited range of questions that are similar to each other, he said. Deep understanding, he said, implies broad context and broad knowledge. “The entire spectrum of current algorithms that we know for AI are not going to scale to human intelligence, let alone super-intelligence.”

Since I am typing these words on my laptop computer, it is immediately apparent that my thoughts are being recorded, constantly edited, and eventually refined to my satisfaction (usually) by a reasonably, and artificially, intelligent machine. That machine is connected wirelessly to a network device, which is connected to a router permitting access to a world-wide web of other computers and routing devices, all seamlessly communicating and providing feedback and information from an array of global resources. All of this happens while I sit on my sofa having my morning coffee, which was prepared automatically by my electronic coffee maker, in a room that has a voice-activated Alexa, Bluetooth speakers, and a “smart” television, all capable of being connected to my “smartphone,” through which I can control, in significant ways, all of the other devices.

It hardly requires more than a passing familiarity with our 21st century technology to be able to conclude that, in a big way, some aspect of artificial intelligence has made virtually all of this activity possible. With a few keystrokes on my laptop, or the tapping of a few icons on my smartphone, I can connect and engage each of these devices to perform an astonishing array of functions, including a particularly interesting version of conversation with a lovely, female-sounding, voice-activated device with an apparent capability to understand, to an impressive degree, words and inquiries initiated by me.

In the spirit of intellectual curiosity, I decided to press Alexa to push the boundaries of the device’s “intelligence,” to see if I might be able to get it to “make a connection” between ideas that I bring up in a logical sequence. My first question asked about the definition of artificial intelligence:

Alexa, what is artificial intelligence?

“Artificial intelligence is usually defined as the branch of computer science that deal with writing computer programs that can solve problems creatively.”

(I noticed right away that Alexa made a grammatical error in saying “deal” instead of “deals.”)

Alexa, what is deep learning?

“Deep learning is part of a broader family of machine learning methods based on learning data representations as opposed to task-specific algorithms.”

Alexa, what is an algorithm?

“Algorithm is usually defined as a precise rule or set of rules, specifying how to solve some problem.”

Alexa, give me an example of an algorithm.

“Constraint is a random algorithm.”

Alexa, what is a constraint algorithm?

“In computational chemistry, a constraint algorithm is a method for satisfying the Newtonian motion of a rigid body which consists of mass points.”
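Alexa’s definition of an algorithm, “a precise rule or set of rules, specifying how to solve some problem,” is easier to appreciate with a concrete case. Here is a minimal sketch, in Python, of one of the oldest known examples, Euclid’s method for finding the greatest common divisor of two integers (the code is my own illustration, not anything the device produced):

```python
def gcd(a, b):
    """Euclid's algorithm: a precise, finite set of rules that
    always terminates with the greatest common divisor of a and b."""
    while b != 0:
        # Rule: replace the pair (a, b) with (b, a mod b) until b reaches 0.
        a, b = b, a % b
    return a

print(gcd(48, 18))  # prints 6
```

Every step is fully specified in advance, which is precisely what distinguishes an algorithm from the open-ended, improvised reasoning a human conversation partner brings to an exchange.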

While this exchange was interesting in itself, considering that I was interacting with a device using only my voice and that Alexa’s responses to my questions generally were appropriate, the moment I went beyond the simple formulation of asking “what is,” Alexa quickly resorted to the default responses of “I’m not sure” and “I don’t know the answer to that.” The impressive range of information at Alexa’s disposal and the ability to respond to specific questions are both useful if you are doing a search or want to know how to spell a word, but if you want to converse or ask further questions about the material provided, you will be disappointed. I told Alexa that I wanted to have a conversation, and was surprised to hear the device suggest that I engage “a social bot.”

Once I engaged this option, I was momentarily encouraged by the more conversational tone of the words that issued from the speaker. For a moment, the words seemed almost spontaneous, until, instead of saying, “yes,” to a suggestion, I responded with “sure.” Alexa’s chatty response was, “Oh, this is embarrassing, I’m not sure how to respond to what you said…would you like to talk about something else?” It quickly became apparent that there were a limited number of responses that I could give, and that my responses had to conform to a particular pattern or the device would revert to the default suggestion that we talk about something else. The responses became a little longer and there would occasionally be a humorous interjection when the exchange reached its limit, but I tired fairly quickly of having to conform to a formula in order to continue the exchange.

As interesting as even these limited options are in the context of “talking” to a machine, it is painfully obvious that there is really “no one home”; no “ghost in the machine.” There is a clear distinction between my interest in a conversation and the machine’s ability to participate in any meaningful way. It is still a practical and interesting way to interact with an information source, and the advantages these devices provide are often surprising.

In the kitchen, there is another device called the “Echo Show,” which utilizes the same algorithms and methods as Alexa, but has the added advantage of being able to provide video results when the opportunity presents itself. When I installed the device initially, the default “wake up” word was also Alexa, which immediately caused both devices to respond simultaneously, so I had to change the “wake up” word to “Echo” in order not to engage the Alexa device at the same time. The advantage of having the option to display a variety of video responses is a clear enhancement of the original concept, but the option to conduct face-to-face video messaging and calls with others who have an Echo device requires giving access to contacts and phone numbers, and currently that seems a bit beyond my comfort zone regarding digital privacy and sharing.

The stark difference between the artificial intelligence supporting the machine and the natural intelligence I was using to interact with it points to one of the key elements in discussing the potential inherent in the future of AI development. No matter how sophisticated the sensors, algorithms, or models of deep learning become, perception and processing alone are insufficient to produce the capacity for introspection, or the “inner sense” we experience as living, sentient biological beings. The Stanford Encyclopedia of Philosophy expresses it this way:

“Perception is achieved through dedicated organs such as eyes and ears, whereas there is no (literal) organ of introspection. “The ‘organ’ of introspection is attention, the orientation of which puts a subject in an appropriate relation to a targeted state” (Goldman 2006: 244). Perception ordinarily involves sensory experiences, whereas “No one thinks that one is aware of beliefs and thoughts by having sensations or quasi-sense-experiences of them” (Shoemaker 1994: 255). – Stanford Encyclopedia of Philosophy

All conscious humans with a nominally functional cognitive apparatus (a brain and central nervous system, with basic life-supporting and sensory systems intact), combined with sufficient life experience and at least a minimal ability with language, eventually acquire a degree of “inner sense experience,” which permits and accounts for our capacity for introspection, a capacity that differs from simple perception in significant ways.

Artificial Intelligence and Human Life

Fifty-two prominent researchers on intelligence agreed to a broad definition of the term “intelligence”:

“Intelligence is a very general capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience. It is not merely book learning, a narrow academic skill, or test‑taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings—‘catching on’, ‘making sense’ of things, or ‘figuring out’ what to do. Intelligence, so defined, can be measured, and intelligence tests measure it well.”

–Gottfredson, L. S. Mainstream science on intelligence: excerpt from an editorial with 52 signatories, history, and bibliography. Intelligence 24, 13–23 (1997).

Intelligence of the artificial variety, if it is ever to be considered on a par with the human variety, should then include each of these abilities, as well as the capabilities for comprehension, “catching on,” and the rest. A recent film about this very subject captured some important concerns about what might happen should some breakthrough eventually create what could be described as a “conscious machine.”

“Ex Machina,” the 2015 Universal Studios film, directed by Alex Garland and starring Domhnall Gleeson, Alicia Vikander, and Oscar Isaac, tells the story of a reclusive billionaire genius who owns the largest search engine company in the world and who has built a research facility in a remote mountain setting for the purpose of building an artificially intelligent robot, with the express goal of passing the well-known “Turing Test” for determining whether the “machine” is self-aware. As the film opens, Caleb, an employee of the high-tech firm, has won a lottery drawing within the company to visit the CEO, Nathan, at his research station, and as a result has the opportunity to test the A.I. to see if it is truly “self-aware.” Nathan claims that, if he has succeeded, it would be “the most important event in the history of man.” Caleb corrects him by describing it as “the most important event in the history of gods.”

We are immediately thrown into the astonishing world of the newly “born” A.I., Ava. By virtue of the design of a special high-tech suit, Alicia Vikander’s character appears to be constructed of wires and metal bones, illuminated by a variety of internal lights, and covered strategically with patches of flesh-like “skin,” which allow for the display of facial movements and give “Ava” a basic human appearance. The internal workings are visible enough to suggest how the robot functions, while still providing the basic contours of the human form. It is an accomplished display of special effects, astonishingly realistic and profoundly disturbing at the same time. The contrast is designed to be unsettling to the moviegoer—to draw us in and to shock us into believing that it could be accomplished.

If you haven’t seen the film, it is a powerfully compelling story, and I recommend it wholeheartedly, both as a morality tale that raises the question of how we would have to treat such entities should they actually qualify as self-aware, and as a serious warning about what might happen if we don’t get it right. The character of the robot’s creator, Nathan, clearly isn’t sufficiently cautious regarding the implications of bringing a self-aware robot “online,” and he seems callous and narcissistic as an eccentric billionaire genius.

Story elements aside, many of which were designed to create drama and provide tension, the underlying implications of the circumstances surrounding such an endeavor gave me pause to consider why any future human being capable of such a feat would even want to dabble in such an undertaking in the first place. Regardless of the level of extraordinary intelligence required, bringing such an entity into existence would also require just the right balance of human decency, compassion, and empathy, coupled with profound and penetrating neuroscientific acumen. While the technological and scientific principles supporting such an invention would be of great interest to artificial intelligence advocates generally, and those who would stand to benefit financially and otherwise would have an understandable motive to see it through, the actual created entity itself would present humanity with the most challenging and perplexing dilemma it could ever face—how to know if it would turn out to be a powerfully beneficial scientific breakthrough, or the eventual instrument of our own obsolescence!

At this point in human evolution, the possibility of constructing anything even close to the self-aware robot we meet in the film seems, on the face of it, to be a very unlikely development for a number of reasons. Throughout the film, we are presented with brief glimpses of the architecture and underlying technologies that provide the foundation for how such an entity might be constructed and assembled to achieve the desired result, and none of those elements currently exists in any form even remotely resembling what would be necessary to accomplish this enormously complex task. Using even the most sophisticated and powerful computers known to humanity, we can barely reach a level of AI that approximates the sophistication of the most basic nervous system of a minimally sentient creature.

Several projects being undertaken to “mimic” the human brain, using our most promising approaches for “deep learning” and giant “supercomputers” like IBM’s Watson, are simply nowhere near being able to reproduce anything resembling even a fraction of the innate capabilities that our own three-pound squishy mental organ can manage, with its trillions of connections inside our exquisitely shaped and evolutionarily sculpted skulls. This inheritance of the long evolutionary path of modern primates provided Homo sapiens with a distinctly and uniquely capable cognitive system, which exists (so far as we know) only within human beings, and consists of the most complex arrangement of neural networks of any known species. It is presumptuous indeed to suppose that any artificial system might one day exist that could recreate precisely what now exists within us, possessing the same character and quality of a living, breathing, sentient modern human.

Even the tiniest quantum “neurons,” the atomic-scale components proposed with the advent of quantum computers, require supporting technologies that would make it prohibitively difficult to squeeze them into a space as small as the human skull. The character of Ava, portrayed unflinchingly by Alicia Vikander, has so many of the affectations of modern humans, and is intriguing beyond any expectation of her creator or her Turing tester, that we easily get caught up in suspending our knowledge that no such creature currently exists. The interplay between Caleb and Ava eventually reaches a fever pitch, and we are compelled to hang on to the edge of our seats as the drama unfolds.

Producing sophisticated and intelligent machines is well worth the investment of the resources available, and I’m not suggesting that we abandon artificial intelligence research and development. Many of the films that attempt to portray what might take place in a world where such inventions exist offer only a superficial portrayal of the opposing characters, glossing over the significant differences between artificial machines and sentient living humans. In “Ex Machina,” the contrast is absolutely startling, as both human and machine present a potent display of the limits of both the technology and our human understanding of what makes us truly self-aware.

What it usually boils down to is whether the filmmakers believe consciousness is a product of brain physiology–whether it “emerges” out of the firing of neurons and the electrochemical processes described by neuroscience–or instead exists as a phenomenon of indeterminate origin, made available to us by virtue of possessing “the right stuff”: a sufficiently complex cognitive organ.

Any attempt to reduce the complex and holistic phenomenal experience of consciousness to simply putting together enough neurons in the right arrangement, and coordinating systems and functions in just the right manner, not only seriously underestimates the phenomenon itself, but also fails to take into account the awesome and sometimes mysterious character of our humanity. Human nature and nurture won’t ever be truly obsolete, as long as we continue to appreciate the supreme value of human life, and acknowledge with gratitude our awareness of our subjective experience of existing as complex sentient beings. We are imperfect creatures who often don’t understand or appreciate fully how miraculous it is to be a participant in the experience of life on Earth, and we cannot expect any artificial “life” to be anything other than a reflection of the moral character and scientific competence of its creator.

What It Means To Feel

Since there is so much conversation going on these days about Artificial Intelligence, and about what we might expect in the coming years as scientists and researchers advance in constructing ever-more-complex machines, I thought it might be a good time to consider not only what it means to be “intelligent,” but also what importance the term “artificial” carries when the two terms are used together. In recent years, cognitive scientists and AI researchers have made significant progress in producing machines which can perform specific tasks and demonstrate specialized capacities for accomplishing remarkable feats of machine intelligence, and which, in very specific ways, have outperformed humans in circumstances previously thought to be beyond such artificial constructs.

While the hoopla and publicity surrounding such events generally result in hyperbole and sensational headlines, there is a degree of fundamental achievement underneath it all that warrants our attention and could be described as commendable in the context of modern scientific research. Most media consumers and television viewers have encountered the commercials for IBM’s Watson, and have likely been exposed to reports of Watson’s abilities and accomplishments. There is much to admire in the work that resulted in such a system, and the benefits are fairly straightforward as presented by the advertisements, although it is also clear that they have been designed to feature what might be the most benign and easy-to-understand characteristics of a system which accomplishes its tasks using artificial intelligence. Much of the underlying science, potential risks, and limits of such research are rarely discussed in such ads.

In order to make some kind of sense of it all, and to think about what exactly is being accomplished with artificial intelligence, what forces and processes are being employed, and how the results compare to other cognitive achievements, especially as they relate to human intelligence and human cognitive processes, we have to understand something about the most important differences between a system like Watson and the cognitive processes and brain physiology of modern humans. While some striking similarities exist between the basic architecture of neural networks in the brain and modern AI devices, not a single project currently being undertaken is anywhere near the goal of rising to an equivalent level of general capability, or even of achieving a basic understanding of what it takes to create a human mind. It’s not that it’s an impossible undertaking, nor is it impossible to imagine how human minds might eventually make great leaps both in constructing advanced systems and in reaching a greater level of understanding. After all, the human mind is pretty stunning all by itself!

What is most discouraging from my point of view is how much emphasis is being placed on the mechanics of intelligence–the structural underpinnings of physical systems–instead of on a more holistic and comprehensive approach to increasing our understanding. A recent article in the Wall Street Journal by Yale University computer science professor David Gelernter (Review, March 19-20, 2017) posits that “…software can simulate feeling. A robot can tell you it’s depressed and act depressed, although it feels nothing.” Whether or not this approach might bring us closer to “machines that can think and feel,” successfully doing so seems like a long shot. If all we can do is “simulate” a human mind, is that really accomplishing anything?

Professor Gelernter goes to great lengths to describe the levels of a functional human mind, giving us valuable insights into the way our own minds work; he illuminates the way we shift between levels of awareness, as well as how we make such good use of our unique brand of intelligence. He then suggests that AI could create these same circumstances in a “computer mind,” and that it could “…in principle, build a simulated mind that reproduced all of the nuances of human thought, and (which) dealt with the world in a thoroughly human way, despite being unconscious.” He takes great pains to enumerate all the ways in which the “spectrum” of a human mind operates, and then concludes that “Once AI has decided to notice and accept this spectrum–this basic fact about the mind–we will be able to reproduce it in software.”
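Gelernter’s distinction between acting depressed and feeling depressed can be made concrete with a deliberately trivial sketch (my own illustration, with invented names, not anything from his article): a program can maintain a variable labeled “mood” and report it in convincing first-person language, while nothing in the machine experiences anything at all.

```python
class SimulatedMood:
    """A toy 'emotional' agent: it tracks a numeric mood score and
    reports feelings in the first person, yet nothing here feels anything."""

    def __init__(self):
        self.score = 0  # negative reads as 'sad', positive as 'happy'

    def hear(self, event):
        # Crude keyword scoring stands in for genuine emotional appraisal.
        if "bad" in event or "loss" in event:
            self.score -= 1
        elif "good" in event or "win" in event:
            self.score += 1

    def report(self):
        if self.score < 0:
            return "I'm feeling rather down today."
        if self.score > 0:
            return "I'm feeling great, thanks for asking!"
        return "I feel okay, I suppose."

agent = SimulatedMood()
agent.hear("bad news about the project")
print(agent.report())  # prints: I'm feeling rather down today.
```

However persuasive the output, the “mood” here is just an integer; the gap between that and actual feeling is the entire point of the distinction.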

We cannot reduce what it means to feel to the astonishingly complex machinations of the human brain, any more than we can boil down the complexity of the human brain to the point where an artfully written piece of software can recreate anything even close to human feelings–what it actually feels like to be a living, breathing, cognitive human being. As Hamlet explains to Horatio, “There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy.” Shakespeare’s intimation on the limitations of even human thought should give us pause to consider the limitations of producing it artificially.

—more to come—

An Artificial Version of Human Intelligence?


According to Webster’s Unabridged Dictionary, intelligence is defined as:

noun
1. capacity for learning, reasoning, understanding, and similar forms of mental activity; aptitude in grasping truths, relationships, facts, meanings, etc.
2. manifestation of a high mental capacity: “He writes with intelligence and wit.”

In a recent study conducted at the University of Western Ontario, researchers acknowledged the limitations of current scientific research, but offered a basis for suggesting factors to consider. They “looked into the brain areas that are activated by tasks that are typically used to test for intelligence,” and reported their results–

“…based on the set of brain areas that might contribute to those tasks. However don’t get too excited, the methods used have severe limitations and we are still only at the hypothesis level. We do not know how these areas contribute to performance in intelligence tests and we do not know why they are activated and how they interact together to create the behavior.”

http://blog.brainfacts.org/

According to a recently published neuroscientific paper, “a broader definition was agreed to by 52 prominent researchers on intelligence:”

“Intelligence is a very general capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience. It is not merely book learning, a narrow academic skill, or test‑taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings—‘catching on’, ‘making sense’ of things, or ‘figuring out’ what to do. Intelligence, so defined, can be measured, and intelligence tests measure it well.”

Reviewing the many related brain structures involved in cognitive functioning, researchers concluded that:

“…variations in these structures and functions may be “endophenotypes” for intelligence — that is, they might be intermediate physiological markers that contribute directly to intelligence. Therefore, genes involved in intelligence might be more closely linked to these variations in brain structure and function than to intelligence itself. In fact, in all studies to date, the genetic influences on these structures and functions were highly correlated with those on general intelligence.”

–excerpts from “The Neuroscience of Human Intelligence Differences,” by Ian J. Deary, Lars Penke, and Wendy Johnson

There are a number of individuals today who are beginning to make associations between the technological advances of modern science and some of the ancient esoteric traditions like yoga, in an attempt to explain our subjective experience of consciousness:

“If hypothetical machinery inside neurons fails to explain qualia, (the ‘what-it’s-like’ quality of experience) must we then consider the molecules that make up the neuronal machinery, or the atoms inside the molecules, or the subatomic particles inside the atoms? Where is the difference that causes the qualia of subjective experience? A less problematic explanation is possible. German scientist, Gottfried Leibniz, postulated irreducible quanta of consciousness he termed ‘monads.’ Matter does not create consciousness. Instead, matter is animated by monads. It seems hardly a coincidence that Leibniz’ monads would perfectly fit between the moments of time that lead to Kaivalya, (Yoga term for enlightenment or nirvana.)

Ultimately, Kaivalya is an ineffable experience. But the claim of yoga is that it provides means to experience what is outside of the individualized mind. The experience of going through the center of consciousness and emerging, as it were, on the other side is very much one of turning inside out. In our ordinary consciousness we are turned outwards towards the world-image which we externalized around us.

In going through our consciousness the entire process is reversed, we experience an inversion…that which was without becomes within. In fact, when we succeed in going through our center of consciousness and emerge on the other side, we do not so much realize a new world around us as a new world within us. We seem to be on the surface of a sphere having all within ourselves and yet to be at each point of it simultaneously…the outstanding reality of our experience…is the amazing fact that nothing is outside us.”

–excerpts from an article by Donald J. DeGracia, Associate Professor of Physiology at the Wayne State University School of Medicine, Detroit, in EdgeScience Magazine #16, November 2013


Recent research in artificial intelligence has begun to approach what might be described as a kind of tipping point, where the lines will likely begin to blur between what is clearly a type of machine intelligence, like the current offerings in robotics and self-driving cars, and something more akin to the kind of intelligence that talks back to you or responds in a more conversational manner, like Apple’s “Siri” and Windows 10’s personal assistant application, “Cortana.” Many of these innovations are built upon interest in the idea of eventually being able to develop the technologies surrounding A.I. to the point where they will function so much like the human brain that communicating with them will be virtually indistinguishable from communicating with another live human person.

While this is an enormously appealing concept to our modern sensibilities, and one currently fueling a huge amount of research in the industry, even supposing that it might be possible to produce a device or platform commensurate with the trillions of connections between neurons in the human brain, characterizing any resulting machine as either “intelligent” or “conscious” requires us to re-examine what it means to be intelligent and conscious. Our current understanding of these terms, even as they apply to humans, is still not especially comprehensive or complete, and looking at the development of “human” or “biological” intelligence through the millennia demonstrates a key component of the challenge in creating an artificial version that might qualify as equivalent.


Early humans and their fellow primates and mammals, along with all the various species endowed with sufficiently complex neural structures and central nervous systems, eventually possessed a brain or other neural configuration of adequate strength, size, and architecture to allow for the retention of memories and for the processing of sensory data gleaned through the available senses. These structures, from the most primitive to the most sophisticated, at some point provided the necessary support for adaptive learning, and for acquiring species-specific abilities sufficient for each organism to make efficient use of that information and to produce a range of results, commensurate with its capacities and habitat, which enhanced its survival in its respective environment.

Once our ancient ancestors reached a certain level of development, through the integration of incremental evolutionary changes, they achieved a nominal degree of enhanced cognitive talent, attaining a sufficient capacity for what we describe as “human intelligence,” which eventually led to the ability to reason and plan well enough to override emotional distractions, needs, and desires, and to awaken to a penetrating level of subjective self-awareness. As any parent of a healthy child can tell you, intelligence does not appear immediately even in modern human children. Even in the advantageous circumstances and environments in which these amazing cognitive human creatures develop, it still requires a minimal degree of relevant experience in the world to accumulate a useful and functional knowledge base, to hone learning skills, and to be able to draw on a collection of memories, all of which enhance whatever cognitive, genetic, and other physiological resources they might bring to the process.

As a consequence of the random combinations of chromosomes in the human reproductive process, there is a sufficient degree of diversity in the general distribution of combinations available to the human genome that each human child has a relatively unique set of genetic circumstances. This diversity is necessary for the health of our species, and as a result we observe a full range of endowment, which can bestow on our descendants a general baseline capacity for the development of cognitive efficiency or, at the other end of the spectrum, a potential for enhanced intellectual development right from the start. A vast array of cultural and environmental variables can either promote or inhibit whatever potential is present, and throughout human history we have observed how a viable or disadvantageous environment, as well as individual initiative or apathy, can alter the equation in either direction.

It seems likely, in view of these factors, that it is through a combination of innate cognitive talent, genetic endowment, and environmental conditions that intelligence either makes a significant appearance or falters and struggles to gain ground, in much the same way as it has since the earliest neural structures appeared in the ancestors of whatever creatures are extant today. In every case, whatever degree of potential existed within a particular species was either successfully developed and exploited for survival, or was thwarted by circumstances from developing enough to sustain a niche, resulting in that species’ extinction.


Our challenge in the 21st century is to determine which contributing factors for increasing intelligence can be safely selected for incorporation into what we currently describe as “artificial intelligence” or “machine intelligence.” Unfortunately, no matter what we are ultimately able to do, in my view, we won’t be able to incorporate our humanity fully into machines, nor will we be able to artificially endow them with the experience of “being human.” Our awareness of existing as a human being clearly requires a variety of nominally functional, finely-tuned, and integrated biological systems, each of which is currently essential; but there is so much more to being a subjectively aware human person. There must be something that it is like to be human, which cannot be precisely replicated by any technological advancement or created through sheer engineering genius. The subjective experience of human consciousness utilizes our very human capacity for intelligence, as well as our access to a penetrating awareness provided by an astonishing array of electrochemical processes in our miraculous brains, but what we are accessing is not PRODUCED by the brain; rather, it is PERCEIVED by it.

It’s interesting to me how some scientists and thinkers in the various fields of artificial intelligence research believe that producing a sentient, intelligent, and conscious machine is simply a matter of achieving a degree of complexity in our data-processing structures equivalent to that of the human brain, and of constructing sufficiently pliable, flexible, and interactive software driven by the necessary algorithms.

In his fascinating and expansive book entitled “The Universe in a Nutshell,” Stephen Hawking posits that if “very complicated chemical molecules can operate in humans to make them intelligent,” it should follow that “equally complicated electronic circuits can also make computers act in an intelligent way.” He goes on to say that electronic circuits share the same problem as the chemical processes in our brains: processing data at a useful speed. He also rightly points out that computers currently have less computational power than “a humble earthworm,” and while they “have the advantage of speed…they show no sign of intelligence.” And he reminds us that even with our capacity for what we call intelligence, “the human race does not have a very good record of intelligent behavior.”


The possession of a capacity for intelligence of any sort, artificial or otherwise, is clearly not a “stand-alone” feature sufficient to sustain any species in and of itself. As we have observed throughout the evolutionary history of the natural world, constructing and sustaining a successful organism requires the development of a range of compensatory and complementary abilities and potentials, commensurate with the designs and functions of a particular species, in order to achieve a requisite degree of balance.

In the case of Homo sapiens, our particular brand of intelligence, as we currently understand it, appears to be primarily the result of our evolution and progress as upright, bipedal, and increasingly cognitive beings. As a result, our species is apparently uniquely well-suited to its evolutionary niche, and currently dominates among living organisms largely for this very reason. While we share much in common with our primate and mammalian family of creatures, and bearing in mind that we are equally indebted to all living things and to the Earth itself for our continued ability to sustain ourselves, intelligence appears to exist in remarkably adaptive and unique ways along each of the evolutionary paths of the species that coexist with us today.

It would be arrogant to suggest that our variety of intelligence is in any way superior to that enjoyed by other organisms on our planet, except in the context of its usefulness to our specific nature as humans. Our highly adaptive nature is fairly well-suited to the requirements of our species, and while one might reasonably argue that our inclinations and intelligence are lacking in one way or another, human intelligence has, even considering our limitations, foibles, and perceived deficits, managed to keep pace with our continued evolution thus far. Provided that we persist in developing and adapting to our ever-changing circumstances, there is cause for optimism, in my view.

What we tend to miss in most of our estimations of the sort of artificial intelligence that might emerge from our efforts is that, no matter what results are forthcoming, it will very likely be profoundly different from our own, no matter how faithfully we aim to recreate the mental processes and physiological structures of our own exquisitely adaptive brains.