
The Artisan and the Artilect, Part 3

December 6, 2005

Contents
The Fovea of Consciousness
The Future of AI
Triumph of Will
The Artisan and the Artilect

There is always a purpose behind our actions. Whether an agenda is hidden or our cards are on the table, whether mindful and calculated, or blithe and unconscious, intent goads our thoughts and intentionality steers our actions. It is that will, that teleological impulse that drives every human creature, that serves as the theme for this third and final chapter in this series.

The first two installments explored two aspects of the developing noosphere of artificial intelligence: 1) teleology (purpose and direction) and 2) epistemology (assumptions and world view). And now "will-cum-consciousness" enters the fray. If general-purpose, human-competitive (GPHC) machines are very nearly at our doorstep, as a growing number of respected academicians and luminaries would have us believe, then the word "consciousness" must at some point enter the discussion. And consciousness implies "will."

The Fovea of Consciousness

Consciousness. You know it. You have it. You use it. You wake to it every morning and put it away every night. The quality of it morphs over time and experience, and changes in response to the milestones of your existence. Consciousness is the very marrow of your humanity. Without it, you have no memory, no experience, no life. It defines you, inclines you, and refines you. But how would you define it?

A technical definition of consciousness escapes our grasp. In a wonderful 1999 paper (PDF) on the subject, John Searle (see Searle's Chinese Room argument against Strong AI) proffers a number of assertions regarding the study of consciousness and offers guidance for a community otherwise reluctant to tackle it as a field of study:

"In my view the most important problem in the biological sciences today is the problem of consciousness. I believe we are now at a point where we can address this problem as a biological problem like any other. For decades research has been impeded by two mistaken views: first, that consciousness is just a special sort of computer program, a special software in the hardware of the brain; and second that consciousness was just a matter of information processing ... I believe, on the contrary, that understanding the nature of consciousness crucially requires understanding how brain processes cause and realize consciousness. Perhaps when we understand how brains do that, we can build conscious artifacts using some nonbiological materials that duplicate, and not merely simulate, the causal powers that brains have. But first we need to understand how brains do it." - John Searle, "Consciousness," Sept. 1999

Thus, according to Searle, if the emergence of a Strong AI is to occur, one that duplicates and not merely simulates intelligence, ultimately giving rise to actual GPHC machines, then we need to understand and engineer consciousness--and will.

Jeff Hawkins, playing the role of the artisan in this series, seems to agree with Searle, at least insofar as asserting the need to understand the physical structure of the human brain a little better before tackling AI in earnest. In On Intelligence, he writes that he believes "consciousness is simply what it feels like to have a neocortex." Daniel Dennett's computationalism (a computational model of consciousness) ensures him a lifetime of holiday invitations from Richard "More Darwinian than Darwin" Dawkins and his Blind Watchmaker Club. According to Dennett, most of what we attribute to intention is simply a projection of our own internal, fragmented state having little or nothing to do with intention per se.

In his 1994 book, The Muse in the Machine, Yale computer scientist David Gelernter discusses consciousness and the questions of AI, and in disagreement with Daniel Dennett, offers an analogy as follows:

[Figure: Star illusion]

"Suppose that in this figure, the dark areas are cut-outs that have been glued to the page. You see a star. Dennett's appointed role is to tell us but wait, there isn't actually a star there. This is just a collection of separate cut-out fragments, none of them star-shaped. Sure you see a star, but that's just an illusion. If you have carefully followed my argument, you can now abandon the illusion and face facts. What's wrong with this argument? Well, the interesting point, of course, isn't the fact that there's no actual star-shaped object in the picture. Obviously there isn't. The interesting point is precisely that, despite that obvious fact, I still see a star. That's what's remarkable." - David Gelernter, The Muse in the Machine, p. 160

So do you see a star? Sorry, but there is no star. If, like Gelernter, you see a star anyway, despite the obvious fact that there is none, then perhaps your very own illusion of consciousness is suggesting that there is more to the puzzle of consciousness than Dennett would have us conclude. If, as Hawkins asserts, consciousness is simply what it "feels" like to have a neocortex, then perhaps a GPHC machine designed to mimic a human brain will simply give rise to consciousness. Will that machine then also see a star?

The Future of AI

Regardless of your preferences for watchmakers, blind or nearsighted, one thing is clear: both science and religion harbor the very same objective, which Ray Kurzweil articulates in the title of his most recent book, The Singularity is Near: When Humans Transcend Biology. The recognition and fear of death is as much a hallmark of sentience as any other measure we can identify.

The question of identity is also fundamental to the future of AI, especially in light of the Kurzweil prediction of imminent biological transcendence. I must admit that, even during the most intoxicating period of evangelism at Sun, I had a difficult time imagining how the part of me that is "me," that illusion of observer in my personal Cartesian theater, could migrate to a non-biological entity and still remain essentially "me." Finally, while reading Kurzweil's latest tome, it dawned on me: remember Washington's Axe?

I can imagine a cognitive prosthesis--some digital mnemonic device that extends my personal storage capacity in some way, but accessed directly via my own thoughts, as if it were an integral part of my own cognitive system. I can imagine such a device merely enhancing my native capacity--once installed, I am suddenly able to remember more and better than I did before. I can imagine slowly transferring memories to the new system; perhaps as I remember something it is replicated there. The storage architecture might even mimic Hawkins' theory of how human memory works. But gradually, the lifetime of memories that I have accumulated finds another substrate from which to unfold. And finally, the part of me that finds focus, that "will" part, the hominal CPU--that part gets the upgrade too. And then it's done. I'm no longer a carbon-based life form. I have now transcended. My own inability to imagine how it might feel was always a barrier to my acceptance of the possibility. I am now pleased to report that that personal limit seems to have fallen away, and I can at least imagine a scenario in which Kurzweil could be correct; I don't buy it, but I can see how he might have come to the view that full biological transcendence via technology is possible.

On the other hand, unless the memory transfer includes the destruction of the original substrate, "me" will continue to be me in my current instantiation, presumably in addition to the upgraded form. The implications go well beyond the scope of this series. Suffice it to say that if Kurzweil is right and we are at the doorstep of biological transcendence via technology, then the future of AI is no less daunting than any latter-day religious prophecy. In which case, de Garis may very well be prescient in his prediction of a twenty-first-century war of unprecedented destruction. These are the stories in the book entitled The Future of AI. The field of study is no longer an interesting but marginally useful aspect of computer science. It has become something far greater.

Coming full circle, back to the watchmaker, I suggest that we consider a version of Pascal's Wager, applied to the Dawkins/Kurzweil school of thought, as we ponder the future of AI:

If we believe the Blind Watchmaker thesis, and we are correct, then Kurzweil's Singularity may be the result--provided we are willing to ignore the de Garis Cosmist versus Terran problem; regardless of the complexity, we will discern and harness the stuff of this universe so as to engineer biological transcendence. If we believe the Blind Watchmaker thesis and we are wrong, I suggest that entropy (in the thermodynamic sense) triumphs and we will never fully harness the potential of a universe that we fundamentally do not understand. In this outcome, it is we who are blind, refusing to see the workings of the watchmaker at all.

If, on the other hand, we believe a Nearsighted Watchmaker thesis, one that would embrace Wolfram, Dembski, and even Behe, and we are correct, might Pierre Teilhard de Chardin's Omega Point be beckoning just over the horizon? Or might it at least provide us with a deeper understanding of nature, one that facilitates a more harmonious conscious engagement with reality? Perhaps or perhaps not. But with such an approach, if we are wrong, then I claim we will have at least engaged the universe from a more noble perspective, one that equates beauty with truth and acknowledges the possibility that man is not the sole author of his destiny, as Nietzsche would have us believe; an outcome where, at the very least, art is the result. Entropy and Art. Or the Kurzweil upside married to the de Garis risk versus Bootstrapping God. This is the adjacent possible as we contemplate this new epoch of artificial intelligence.

It's not your father's AI any more.

Triumph of Will

Creation is an act of will. There can be no dispute over the fact that human will is the driver of the wildly diverse set of creative activities from which civilizations grow. Buildings do not accidentally erupt. Software does not (yet) simply appear. Automobiles do not replicate by themselves and evolve. Human will, channeled through organizations and institutions, creates, distills, and propagates the fruits of our minds for the benefit of our environs. Conscious application of will, and the power inherent therein, is the basis for any triumph we might celebrate. And yes, there is a very dark side as well. Speaking of the 1935 propaganda film Triumph des Willens (Triumph of the Will) by German filmmaker Leni Riefenstahl, Ryan Somma writes:

"I found myself overpowered by its majesty. I became swept up in its images of cooperation and goodwill between citizens, its hopeful vision of a better future, its themes of modernity, bringing society into a new age of possibility. An ideal is parading before my eyes, beautiful, perfect ... and then Adolph Hitler appears on the screen and a half-century of infamous history shatters the facade." - Ryan Somma, blogger and web designer

The triumph of will over conscience was the basis for Nietzsche's Ubermensch, or super-human, the codification of which provided Hitler with a philosophical justification for the evil that ensued. A new morality, one that demanded that the Nazis treat other human beings as Untermensch, or subhuman, was the result. In Thus Spoke Zarathustra, Nietzsche's Ubermensch is the one who is willing to risk all for the sake of improving humanity. Compare the Ubermensch stance to de Garis's stated objective "to see humanity [... build] truly godlike artilects that tower above our puny human intellectual, and other, abilities" (The Artilect War), and the similarities should at least give us pause. The unintended consequences of our actions may be the least of our concerns, given the demonstrated capability of the Ubermenschian spirit to hijack the human psyche. The message for the GPHC machine fathers should be clear: be careful what you wish for.

The Artisan and the Artilect

We are at the threshold of what may be an epoch-magnitude change, ushered in by work in the maturing field of artificial intelligence. Classic approaches to AI have yielded interesting results, sometimes even reaching a human-competitive level (see Genetic Programming IV: Routine Human-Competitive Machine Intelligence). But the stated intention of more than one serious effort to produce GPHC machines marks the beginning of a new era. The mind race is on.

At the outset, I framed this discussion of AI based on books by two very different authors, both of whom are making contributions to the field, and both of whom have stated goals of one day producing GPHC machines. The characterization of one as an artisan and the other as an artilect was a literary device and nothing more. In truth, Hawkins is just as much a reductionist as de Garis. And while the de Garis tome may to some smell of bad science fiction, it at least qualifies as art by some definition. So the roles might just as easily have been reversed.

When I began to think about this series, I decided it might be a good idea to attend the American Artisan Festival held in Centennial Park in Nashville in mid-June. Some 160 artists from around the country attended, representing their areas of expertise, including blown, stained, and etched glass; furniture, instruments, and clocks; wall hangings, weavings, and baskets; jewelry; decorative clay pieces; and quilts. As I wandered about the park that day, it occurred to me that there was a common thread running through each crafted piece, one that would help me in my research. So I began to ask the artisans I spoke with a single question about their work, as I admired their handicrafts: "How do you know when it's done?"

Some were surprised by the question. A few laughed. But all of the artisans I asked gave one of two basic answers, which represent the opposing perspectives of the artisan and the artilect, and the ultimate fitness test in the watchmaker's landscape. How do you know when it's done? It is either 1) when it meets the specification, or 2) when I like it.
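
To make the dichotomy concrete, the two answers map onto the two kinds of stopping tests one can attach to an evolutionary search of the sort the genetic programming work cited above relies on. The Python sketch below is purely illustrative; the target bit string, the spec_fitness function, and the i_like_it judge are hypothetical stand-ins, not a description of anyone's actual system.

```python
import random

# Hypothetical "specification": evolve a bit string that matches a fixed target.
TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]


def spec_fitness(candidate):
    """Answer 1, 'when it meets the specification': count matching bits."""
    return sum(1 for c, t in zip(candidate, TARGET) if c == t)


def i_like_it(candidate):
    """Answer 2, 'when I like it': a human (or a learned preference model)
    decides from outside the algorithm whether the work is done."""
    print("Candidate:", candidate)
    return input("Like it? [y/N] ").strip().lower() == "y"


def mutate(parent, rate=0.1):
    """Copy the parent, flipping each bit with a small probability."""
    return [1 - bit if random.random() < rate else bit for bit in parent]


def evolve(done, pop_size=20, max_generations=200):
    """A toy evolutionary loop; only the stopping test `done` changes."""
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    best = max(population, key=spec_fitness)
    for generation in range(max_generations):
        if done(best):
            return best, generation
        # Breed the next generation from the current best candidate.
        population = [mutate(best) for _ in range(pop_size)]
        best = max(population, key=spec_fitness)
    return best, max_generations


if __name__ == "__main__":
    # The artilect's answer: an objective, machine-checkable stopping test.
    best, gens = evolve(lambda c: spec_fitness(c) == len(TARGET))
    print(f"Specification met after {gens} generations: {best}")

    # The artisan's answer: uncomment to let a human decide when it's done.
    # best, gens = evolve(i_like_it)
```

The asymmetry is the point: the specification-driven test can run unattended to completion, while the "when I like it" test pulls the judgment of doneness outside the machine entirely--which is precisely the artisan's answer.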

The dichotomy of the artisan and the artilect is valid from the perspective of framing the discussion of the future of AI. The "how it feels" aspect of intelligence is just as important as the ability to calculate. If GPHC machines are to arrive, whether they are engineered, emergent, or engineered to emerge, the gestalt of a human experience must somehow be replicated in a machine. And while important, the neocortex alone is simply not sufficient to convey or contain the essence of man.

width="1" height="1" border="0" alt=" " />
Max Goff is the Chief Marketing Officer for Digital Reasoning Systems, Inc., headquartered in Nashville, Tennessee.