
The Artisan and the Artilect, Part 2

October 4, 2005

Contents
Teleology and Intelligence
Revenge of the Artisan
The Religion of the Blind Watchmaker
The Nearsighted Watchmaker
The Artilect War

Despite the cacophony of disagreement surrounding the definition of
intelligence, artificial or otherwise, we cling to the illusion
that it is something we can detect and even measure. The Flynn Effect
(http://en.wikipedia.org/wiki/Flynn_effect) is the observation that
measured human intelligence is steadily rising over time. It is
the hominal version of Moore's Law, which does not purport to measure
intelligence, per se, but rather the increasing density of transistors
in a given swath of silicon real estate over time. The increases due to Moore's Law are relatively easy to determine because we can measure the number of transistors
per economic unit for a specific processor and point to the producing
fabrication facilities as the source. Nothing as definitive can
be said of human intelligence, except that we
appear to be getting smarter on average, if IQ tests are an
appropriate
measure.
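
The contrast drawn here can be made concrete with back-of-envelope
arithmetic. The sketch below compares exponential transistor growth
against the roughly linear Flynn effect; the two-year doubling period,
the Intel 4004's starting count, and the three-IQ-points-per-decade
figure are illustrative assumptions, not precise measurements.

```python
# Back-of-envelope comparison: exponential transistor growth (Moore's Law)
# versus the roughly linear Flynn effect. The doubling period, starting
# count, and IQ gain per decade are illustrative assumptions.

def transistor_count(years, initial=2300, doubling_period=2.0):
    """Transistors after `years`, doubling every `doubling_period` years.

    2300 is roughly the transistor count of the Intel 4004 (1971)."""
    return initial * 2 ** (years / doubling_period)

def flynn_iq(years, baseline=100, points_per_decade=3.0):
    """Mean measured IQ after `years`, rising a few points per decade."""
    return baseline + points_per_decade * (years / 10)

# Over 30 years the transistor count multiplies 2**15-fold (32,768x),
# while mean measured IQ rises only about 9 points.
print(transistor_count(30) / transistor_count(0))  # 32768.0
print(flynn_iq(30) - flynn_iq(0))                  # 9.0
```

Whatever the parameters, the shapes of the two curves differ in kind:
one compounds, the other creeps.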

Teleology and Intelligence

There seems to be an increasing level of intelligence—this thing we cannot
adequately define but are very capable of testing for various
purposes. Whether silicon based or human, the
general level of intelligence is rising. The difference between
our digital offspring and ourselves is one of degree, not
direction. The following questions, posed in the first part of
this series, must be considered:

  • Where exactly is "intelligence," artificial and otherwise,
    headed?
  • Are we heir to a "teleology of intelligence," or simply stumbling
    forward in a drunkard's random walk
    (http://en.wikipedia.org/wiki/Random_walk)?
  • Is there a discernible design or purpose that propels us
    forward?

The answer to that set of questions will fundamentally determine how we
approach the
artisan/artilect dichotomy we may face as Moore's Law continues
to
unfold:

  • How will we respond to the emergence of
    entities that have the potential to dramatically surpass human
    intelligence,
    individually and collectively?
  • If such entities are possible, how long do we have before we come
    face-to-interface with them?
  • What might they think of us?
  • Are we now at the dawn of that epoch?

To even begin contemplating a response, we require a presuppositional
framework (http://dictionary.reference.com/search?q=presuppositional), a
"world view" of some sort. To play the game, we must presume something
in the way of a universe
external to our own personal, limited scope of direct experience. To that end, it should be clear that increasing intelligence is either
a natural
outcome
of the
universe or it is not. We may posit that the increasing
intelligence we
are witness to—whether it is biological or digital, whether one necessarily
leads to the other—is an emergent property of the universe in which
we live. Alternatively, we may dismiss such notions and retreat
to the random-walk
explanation for all manifestations of complexity, of which increasing
intelligence is simply another example, and therefore no more
remarkable than any other evolutionary development. Regardless
of presuppositional framework, we have
two choices when it comes to the teleology of intelligence: Either intelligence is natural, emergent,
and inevitable, or it is random, accidental, and ultimately entropic.

One temptation we should avoid is a retreat to a "random emergence"
explanation when it comes to discussions of the teleology of
intelligence, as it again unnecessarily muddies
the water. If we cannot acknowledge an intimate relationship
between physical laws and the phenomenon of emergence, then we will not
find safe harbor in either science or faith. When it comes to
complexity, either emergence is a function of the underlying physical
laws, or the physical laws are a function of the underlying emergence;
therefore, a "random emergence" explanation is a non-starter. In this
series I refer to the new phase of artificial intelligence (AI) exploration as
the "Hawkins/de Garis epoch," after Palm founder Jeff Hawkins and
professor Hugo de Garis, both of whom are pursuing general-purpose,
human-competitive intelligence devices. To illustrate
the two opposing approaches, Hawkins represents the
artisan and de Garis the artilect.

Revenge of the Artisan

Generally speaking, all
classic AI systems fall into one or both of these two categories:

  • Darwinian: Systems that learn or evolve based on some
    selection criteria
  • Rules-based: Systems that model the knowledge domain and infer a
    set of rules based on the model

An a priori
inference model is required in either case. With rules-based systems,
the knowledge domain must be modeled in advance. With Darwinian
systems, selection criteria (in other words, a teleology) are
required. The inherent problem, therefore, is one of
building a better model. Even Darwinian systems face this
problem; the tuning of selection criteria in order to augment outcome
is a common practice—indeed, augmentation in order to influence
outcome itself
betrays a teleological bias.
All this is to say that the artisan ethos is alive and well. The
fact that we have certain human awareness regarding truth and beauty,
which leads us to favor one outcome over another, should not be a
surprise when it comes to any endeavor, including work
in AI. But then doesn't this awareness strongly indicate a
presuppositional framework?
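
The teleological bias of Darwinian systems described above can be seen
in miniature in any toy genetic algorithm: the fitness function is the
selection criterion the designer supplies in advance. A minimal sketch,
in which the target value, population size, and mutation range are all
my own arbitrary choices:

```python
import random

# A toy "Darwinian" system. The fitness function below is the selection
# criterion supplied in advance by the designer -- the teleological bias
# the text describes. Change it, and evolution heads somewhere else.

TARGET = 42  # an arbitrary goal chosen by the artisan

def fitness(genome):
    # Selection criterion: closeness to the designer's chosen target.
    return -abs(genome - TARGET)

def evolve(pop_size=20, generations=100, mutation=5):
    population = [random.randint(0, 100) for _ in range(pop_size)]
    for _ in range(generations):
        # Cumulative selection: keep the fitter half...
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # ...and refill with mutated copies (random variation).
        population = survivors + [
            s + random.randint(-mutation, mutation) for s in survivors
        ]
    return max(population, key=fitness)

print(evolve())  # converges near 42, because the artisan said so
```

The system "learns" without an explicit domain model, but only because a
goal was smuggled in through the fitness function.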

Human intelligence is different from any theory or attempt at AI
so far. The artisan understands this difference on a gut level.
Hawkins' theory
of human intelligence, while compelling, does not address this visceral
aspect of the human experience, relying instead on a pure neocortex
explanation for human intelligence.

According to Hawkins, the hominal
approach to data processing consists of four major
organizational elements, all aspects of the structure of the neocortex,
which:

  • Stores sequences of patterns: The order in which memories
    flow is based on sequential (temporal) storage structures. A good example of this
    is the flow of the melody in a song; we remember melodies in sequence
    as opposed to reverse order, or some other method. All human
    memory, according to Hawkins, is sequentially stored and accessed.
  • Recalls patterns auto-associatively: Memories are stored in the synaptic
    connections between neurons. While a vast amount of
    information is stored in the neocortex, only a few things are actively
    remembered at one time, because auto-associative memory ensures that
    only the particular part of
    the memory that is immediately relevant to the current context is
    activated. On the basis of
    these activated memory patterns, predictions are made—without us being
    aware of it—about what will happen next.
  • Stores patterns in an invariant form: The neocortex constantly
    receives sequences of patterns of information, which it stores by
    creating invariant representations: memories that are independent of
    incidental details, allowing us to handle variations in the world
    automatically. For instance, you can recognize your friend's face
    after he has shaved his beard.
  • Stores patterns in a
    hierarchy:
    The hierarchical structure of the
    neocortex plays an important role in perception and learning. Low
    regions in the structure of the neocortex make low-level predictions
    (information like color, time, and tone) regarding what is expected
    next, while higher-level regions make
    higher-level predictions about more abstract things. We
    "understand"
    something when the neocortex's prediction fits with the new
    sensory input. Whenever neocortex patterns and sensory patterns
    conflict, there is confusion and our attention is drawn to the anomaly,
    the details of which are then dispatched to higher neocortex
    regions to check if the
    situation can be understood on a higher level. If there are
    patterns somewhere else in the neocortex that more accurately
    fit the current sensory input, the anomaly is resolved.
    Otherwise, cognitive dissonance ensues and appropriate corrective measures
    (such as learning) are taken.
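
The auto-associative element above, in which a relevant fragment
reactivates a whole memory, can be illustrated with a toy Hopfield-style
network. This is a sketch of auto-association in general, not Hawkins'
model of the neocortex:

```python
import numpy as np

# A tiny Hopfield-style auto-associative memory: store a pattern in a
# "synaptic" weight matrix, then recall it from a degraded cue.

pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])

# Hebbian storage: weights are the outer product, no self-connections.
W = np.outer(pattern, pattern)
np.fill_diagonal(W, 0)

# A partial/corrupted cue: two of the eight "neurons" are flipped.
cue = pattern.copy()
cue[0] = -cue[0]
cue[3] = -cue[3]

# Recall: repeatedly update each unit toward the sign of its weighted
# input, letting the fragment pull the full stored pattern back out.
state = cue.copy()
for _ in range(5):
    state = np.sign(W @ state)

print(np.array_equal(state, pattern))  # True: the full memory reactivates
```

The point of the illustration is the one Hawkins makes: only the part of
the stored structure relevant to the current cue needs to be supplied;
the memory completes itself.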

Learning requires the
building of internal models. During repetitive learning,
memories of the world first form in higher regions of the cortex, but as
we learn they are reformed in lower parts of the cortical
hierarchy. Well-learned patterns are represented low in the
cortex while new
information is sent to higher parts. Slowly, the neocortex
builds in itself a representation of the world it encounters.
According to Hawkins,
"The real world's nested structure is mirrored by the nested structure
of your cortex."

The Hawkins brain architecture
provides a plausible explanation for the efficiency and great speed of
the human
brain while dealing with complex tasks of a familiar kind. It is
interesting to note, however, that if Hawkins is correct, then it
follows that we do not actually see or hear precisely what is occurring
at any point in time. When
people are talking, by definition we don’t fully listen to what
they say. Rather, we are constantly predicting what they
will say next, and as long as
there seems to be a fit between prediction and incoming sensory
information, our attention remains rather low. Only when they say
something that actively conflicts with our prediction do
we pay attention. It is therefore the unexpected that provides
opportunities for learning—sort of like stumbling across a well-made watch in a verdant meadow on an uninhabited island.
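
This prediction-driven account of attention can be sketched with a toy
bigram predictor: the listener attends only to words that conflict with
the most probable continuation. The corpus and the mechanism are my own
contrivance for illustration, not Hawkins' algorithm:

```python
from collections import defaultdict, Counter

# Toy version of prediction-driven attention: learn bigram statistics,
# then "listen" to a sentence, paying attention only where the next word
# conflicts with the predicted one.

def train(corpus):
    model = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            model[a][b] += 1
    return model

def listen(model, sentence):
    surprises = []
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        predicted, _ = model[a].most_common(1)[0] if model[a] else (None, 0)
        if b != predicted:
            surprises.append(b)  # prediction failed: attention is drawn here
    return surprises

corpus = ["the watch ticks", "the watch ticks", "the watch ticks"]
model = train(corpus)
print(listen(model, "the watch ticks"))  # [] -- all predicted, low attention
print(listen(model, "the watch stops"))  # ['stops'] -- the unexpected word
```

As long as prediction and input agree, nothing surfaces; only the
anomaly demands processing.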

The Religion of the Blind Watchmaker

This metaphor of discovering and explaining the unexpected watch in the
meadow was proposed by biologist Richard Dawkins, who used it in his
1986 book The Blind Watchmaker
(http://www.amazon.com/exec/obidos/tg/detail/-/0393315703/104-4971883-8156728?v=glance)
to illustrate his view of evolution and complexity. The blind watchmaker
thesis posits two basic elements: variation (that is,
mutation) and cumulative natural selection. Variation is the
randomly occurring changes, at a genetic level, produced by spurious
events such as DNA errors in the cell reproduction process.
Generally, such mutations are either neutral or harmful in
effect. Sometimes, however, such changes slightly improve an
organism's ability to survive and
reproduce. Under these favorable mutation conditions, the stage is set
for the second element of the thesis, which is cumulative
natural selection.

Biological organisms generally produce more offspring than can or will
survive to
maturity.
Offspring that possess mutative advantages can be
expected to produce more viable descendants, all things being
equal, than their non-mutant peers. The advantageous mutative
trait eventually spreads
throughout the species and becomes the norm for further
cumulative improvements in succeeding generations. Given
sufficient time and advantageous mutations, enormously complex
organs
and patterns of adaptive behavior can eventually be produced in tiny
cumulative steps, without the assistance of any preexisting
intelligence. All this can happen, that is, if the theory
is
true.
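
Dawkins' own demonstration of cumulative selection is the famous
"weasel" program from The Blind Watchmaker: random variation plus
cumulative selection converges on a target phrase dramatically faster
than single-step chance ever could. A sketch along those lines follows;
the population size and mutation rate are my choices, and note that the
target phrase is itself a goal supplied in advance by the programmer.

```python
import random

# A sketch of Dawkins' "weasel" illustration of cumulative selection:
# each generation, random mutation produces offspring and only the copy
# closest to the target survives to breed the next.

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(phrase):
    """Number of characters already matching the target."""
    return sum(a == b for a, b in zip(phrase, TARGET))

def mutate(phrase, rate=0.05):
    """Copy the phrase, randomizing each character with probability `rate`."""
    return "".join(
        random.choice(ALPHABET) if random.random() < rate else c for c in phrase
    )

def weasel(offspring=100):
    """Return how many generations cumulative selection needs."""
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while parent != TARGET:
        # Cumulative selection: keep the best of the parent and its mutants.
        candidates = [parent] + [mutate(parent) for _ in range(offspring)]
        parent = max(candidates, key=score)
        generations += 1
    return generations

# Typically converges in under a hundred generations; single-step random
# assembly would need on the order of 27**28 tries.
print(weasel())
```

The demonstration is persuasive about the power of cumulation, though
critics note what the next passage argues: the selection criterion here
is anything but blind.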

"One way or another, Darwinists meet
the question 'Is Darwinism
true?' with an answer that amounts to an assertion of power:
'Well, it is science, as we define science, and you will
have to be content with that.' Some of us are not content with
that, because we know that
the empirical evidence for the creative power of natural selection
is somewhere between weak and non-existent. Artificial selection
of fruit flies or domestic animals produces limited change within
the species, but tells us nothing about how insects and mammals
came into existence in the first place.
In any case, whatever artificial selection
achieves
is due to the employment of
human intelligence consciously pursuing a goal. The whole point of the
blind watchmaker thesis, however, is to establish what material
processes can do in the absence of purpose and intelligence. That
Darwinist authorities continually overlook this crucial distinction
gives us little confidence in their objectivity."
(The Religion of the Blind Watchmaker by Phillip E. Johnson,
http://www.origins.org/articles/johnson_blindwatchmaker.html)

The self-consistent castle of belief in which subscribers to Dawkins'
blind watchmaker thesis dwell rests on a foundation that is no less a
leap of faith than any other religion; it is simply less honest.
Religions at least admit to a basis in faith.
Dawkins, a self-described atheist, seems to not even be aware that his
own presuppositional framework is also bound by a belief that
does not rest on a logical proof or material evidence.

When I asked for his views on the teleology of intelligence, Ben
Goertzel (http://www.goertzel.org/), who, like Hawkins and de Garis,
also happens to be working on general-purpose, human-competitive systems
(http://novamente.net/), responded, "I don't
really believe in teleology, but I feel intuitively that there may be a
tendency in the universe for the amount of pattern to increase over
time—if so, then increasing intelligence would be a part of
this...".

The Nearsighted Watchmaker

Intuition? Increasing "pattern"? Okay. So what is that?

Much of Goertzel's own AI work predates and foreshadows observations
in the Hawkins opus in terms of the nature of human intelligence
(although Goertzel goes well beyond Hawkins with speculations regarding
the universal nature of consciousness itself). If anything,
Hawkins' On Intelligence
falls short of the mark that Goertzel's prolific
theoretical work has established with respect to a complete proposal
for a general-purpose, human-competitive machine. (See the Artificial
General Intelligence Research Institute site, http://agiri.org/, as well
as Goertzel's home page for a threshold view
human intelligence that Hawkins articulates are also reflected in
Goertzel's own work, though presented in different terminology.
While Goertzel has come to some similar conclusions with respect to the
nature of human intelligence, he goes beyond Hawkins
(http://www.goertzel.org/dynapsyc/2004/OnBiologicalAndDigitalIntelligence.htm)
by identifying other characteristics of human intelligence, including
the twin complexity-era concepts of attractors
(http://en.wikipedia.org/wiki/Strange_attractor) and emergence, neither
of which is addressed by Hawkins in what is otherwise a reductionist
(http://en.wikipedia.org/wiki/Reductionism) apologetic.

But for Goertzel to side-step the notion of a teleology of intelligence
with concepts like "intuition" and a universal tendency toward "pattern
increase" is a little awkward, to say the least. It seems that when
evidence fails to fit our models and cannot be rationally explained
otherwise, it becomes part of that "other" category, or an intuitive
understanding that people simply have, requiring no further
examination. Ironically, this too betrays a presuppositional
framework that humans have an intuitive modality for knowledge
acquisition that we cannot or will not explain, but will effectively
ignore when it comes to defining and understanding the knowledge
creation process.

But then Goertzel also genuflects at the altar of emergence, which
itself now seems to be a conceptual attractor for a growing audience of
reformed reductionists. See Nobel Laureate Robert Laughlin's recent
book A Different Universe: Reinventing Physics from the Bottom Down
(http://physics.about.com/od/philosophy/a/ADifferentUnive.htm) for a
wonderful discussion of this emerging Age of Emergence. The gnarly
problem with a
full-frontal
embrace of emergence, however, is the fact that the label simply masks
entire swaths of unexplained phenomena with a term that simply allows
us to move on without further consideration. By definition, the
emergent processes or behaviors cannot be explained otherwise and are
therefore emergent. Punt. Declare scientific victory and
move on.

Upon closer examination, a growing array of assumptions on which much
of our current understanding of the universe is based rests on feet of
emergent clay. Solid matter itself, for example, is an emergent
property of self-organized molecules that do not individually reflect
any of the properties of this emergent feature. Emergence also applies to (that is,
masks our complete understanding of) capital 'L' Life,
consciousness, human thought, and the entire field of evolutionary
studies.

Perhaps Dawkins' watchmaker is not as blind as he would have us believe,
with belief itself being the
critical factor. The leap of faith
that Dawkins must embrace in order to codify his religion of the Blind
Watchmaker is no less than the one required to accept more of a
Nearsighted
Watchmaker; one who sees very well the very, very small, and who
understands the
essence of the inherent emergence above based on properties so well
designed below. Perhaps, in the end, it will always reduce to a
matter of belief.

The Artilect War

If the artilect were given to laughter, I am sure the
previous paragraph would have elicited a bit. But the
artilect, like
much of AI itself, is entirely inscrutable from our perspective.
As
such, we cannot know what makes it laugh or even if it does.
According to Hugo de Garis, though, the unknowable nature of much of
classic AI, not to mention the wildly improved Artilect version 2.0 to
come, isn't of concern, as the artilect will far surpass human
intelligence in fairly short order—just as soon as "... neuroscience
understands why intelligence levels of human beings differ because of
differences in their neural structures, [then] it will be possible to
create an intelligence theory, which can be used by neuro-engineers to
increase the intelligence of the artificial brains they build."

Are we clear on that? Once we fully understand human
intelligence, which is an emergent (that is, we can't actually explain it)
property
of the underlying neural structures, we will be able to engineer around
those principles in order to build better artificial brains.
Makes perfect sense, doesn't it?

In all fairness, both Hawkins and de Garis would probably not assert
that human intelligence is an emergent
property of the underlying neural structures; we can give credit
to Goertzel and Laughlin for those insights. Which is to
say that the Hawkins/de Garis epoch is probably the last chapter of
reductionist thinking in the AI space—and the end of research in
classic AI, at least insofar as the creation of a general-purpose,
human-competitive machine is concerned.

The real Artilect War is not one that de Garis has imagined; it is not
some Bill Joy nightmare that pits man against machine or man against
machine builder. The real Artilect War is the clash of
wills that inevitably occurs when human civilizations
collide. The quest for general-purpose, human-competitive
machines is no less a measure of civilization than any other technical
achievement. And how we view the world is our presuppositional
framework for civilization itself. It is that collision of
civilizations, through the looking glass of AI, that is
chronicled in the final chapter in this series.

Max Goff is the Chief Marketing Officer for Digital Reasoning Systems, Inc., headquartered in Nashville, Tennessee.