
The Artisan and the Artilect, Part 1

July 19, 2005


Defining Intelligence
A Really Brief History of AI
A New Phase in AI
The Age of the Artisan
Enter the Artilect
The Teleology of Intelligence

"The opposite of a correct statement is a false statement. But
the opposite of a profound truth may well be another profound
truth." -- Niels Bohr

This series of articles explores the emergence of a new phase of
artificial intelligence (AI). We will consider assumptions and
discoveries of the R&D behind it and review some of the
literature surrounding these new developments by juxtaposing two
very different concepts: the ancient trade of artisan and the
concept of a godlike artificial intellect (the "artilect"), which
some believe will be the inevitable creation of the 21st century.
Theory, opinion, and speculative developments are considered in this
foray into the world of artificial intelligence and
human-competitive machines.


Artisans have been with us since the dawn of humanity. In the
broadest sense, an artisan is a skilled craftsperson; one who
imagines, plans, and builds something with their hands. The roots
of our cognitive emergence may be much, much older than we
previously thought. Kate Wong reports in "The Morning of the
Modern Mind" that evidence for symbolic thinking, a characteristic
of modern man, dates back several hundred thousand years. The
artisan's was the first and most defining vocation of humanity.
Activities such as the making of jewelry by aboriginal artisans
were the first expressions of symbolic thinking, the essence that
separates us from our less semiotic cousins.

Our ancient ancestors used tools to create things of symbolic
value prior to the advent of language. The artisan and tool
together defined and enabled the emergence of modern man. The lucky
accidental genetic modification that gave rise to big-brained
primates who thought symbolically produced a race of eco-fit
artisans, and we are their children. From ancient craftsman to Java
developer, the unbroken line of semiotic workers has produced an
ever-growing noosphere which seems to now be in something of an
accelerated inflationary phase.

Defining Intelligence

Although theories of intelligence abound, there is no single
standard by which we measure intelligence in human beings.
Therefore, when it comes to the replication of human intelligence
in machine form, there is no one specification or set of conditions
that provides guidance. Even the Turing Test,
which is probably the most cited cybernetic holy grail of
human-competitive machine intelligence, is not universally accepted
as an appropriate measure. In On Intelligence,
Palm Computing founder Jeff
Hawkins writes:

"To converse like a human on all matters (to pass the Turing
Test) would require an intelligent machine to have most of the
experiences and emotions of a real human, and to live a humanlike
life. Intelligent machines will have the equivalent of a cortex and
a set of senses, but the rest is optional. It might be entertaining
to watch an intelligent machine shuffle around in a humanlike
body, but it will not have a mind that is remotely humanlike unless
we imbue it with humanlike emotional systems and humanlike
experiences. That would be extremely difficult and, it seems to me,
quite pointless." (On Intelligence, p. 208)

Intelligence may be slippery to define, but that hasn't slowed
investment in attempts to create artificial versions of it.
Turing's influence, coupled with the red herring that the Turing
Test now seems to be, may actually have been an obstacle to
seminal achievements in AI. While the results to date may not have
been Turing Test winners, change is in the wind. And
with change will come concerns, as the changes we must now consider
promise to dwarf the technological impact of previous epochs.

A Really Brief History of AI

If we were to chart the history of AI from the first known usage
of the term, the really brief history of AI, condensed to some of
the most important developments, would look like the following:

  • 1956: John McCarthy coins the term "artificial intelligence" at
    the first AI conference. Lisp, semantic nets, and ELIZA follow.
  • 1967: Richard Greenblatt at MIT builds a knowledge-based
    chess-playing program, MacHack, that was good enough to achieve a
    class-C rating in tournament play. Prolog, expert systems, and
    knowledge representation come next.
  • 1976: Doug Lenat's AM program (Stanford PhD dissertation)
    demonstrates the discovery model (loosely guided search for
    interesting conjectures).
  • 1987: Marvin Minsky publishes The Society of Mind, a
    theoretical description of the mind as a
    collection of cooperating agents.
  • 1990s: Major advances occur in all areas of AI, with successes
    in machine learning, intelligent tutoring, case-based reasoning,
    multi-agent planning, scheduling, uncertainty reasoning, data
    mining, natural language understanding and translation, vision,
    virtual reality, games, and other areas.
  • 1997: The Deep Blue chess program beats the current world chess
    champion, Garry Kasparov, in a widely followed match.
  • Late 1990s: Web crawlers and other AI-based information
    extraction programs become essential in widespread use of the Web.

Full stop; end of story. We could leave it at that, and call it
complete: all you ever really needed to know about AI, at least
"classic AI," in two hundred words or less. This, of course,
ignores the classic contributions of Aristotle, Descartes, Hobbes,
Pascal, Leibniz, Boole, Babbage, Whitehead, Russell, and others.

Next, throw in the following four major application categories
of AI systems, and we have enough for impressive cocktail party conversation:

  • Expert systems: Narrow and deep (domain-specific) knowledge.
  • Natural language processing (NLP): Systems that analyze,
    understand, and generate languages that humans use naturally.
  • Pattern recognition: Systems that classify data (patterns)
    based on either a priori knowledge or on statistical information
    extracted from the patterns.
  • Intelligent device control: Systems that make time-sensitive
    adjustments to devices based on sensory input.
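
The first category, expert systems, reduces in its simplest form
to ordered if-then rules fired against a set of facts. The rules,
symptom names, and function below are invented for illustration, a
minimal sketch rather than any real diagnostic product:

```python
def diagnose(symptoms):
    """Tiny rule-based sketch: the first rule whose conditions are
    all present in the facts "fires" and yields its conclusion.
    Rules and symptom names are hypothetical."""
    rules = [
        ({"fever", "cough"}, "possible flu"),
        ({"fever"}, "possible infection"),
        ({"cough"}, "possible cold"),
    ]
    facts = set(symptoms)
    for conditions, conclusion in rules:
        if conditions <= facts:        # all conditions satisfied?
            return conclusion
    return "no rule matched"

diagnose({"fever", "cough"})           # -> "possible flu"
```

Real expert systems add an inference engine, conflict resolution,
and thousands of rules, but the narrow-and-deep character is the
same.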

There is no such thing as a general-purpose artificial
intelligence application; hence the lack of Turing Test successes,
since the test requires a much wider operating context and world view. AI
applications to date can be very smart in their specific domains but
about as bright as a rutabaga otherwise (akin to a few college
professors I've known). Some would argue that it's just a question
of time and horsepower--if you throw enough CPU cycles, fast
memory and some evolutionary engineering (whatever that means)
against a wall, enough will stick to give rise to a
general-purpose, human-competitive entity.

A New Phase in AI

In the last few years, a new phase in AI has begun, composed of
three major components: 1) the utilization of classic AI
components in real-world, useful applications; 2) the emergence of
a new phase in AI R&D; and 3) the stated objective of more than
one organization to build the machine equivalent of a human brain.

The utilization of classic AI approaches for solving real-world
problems has been steadily waxing for at least a decade. We
actually use AI-manufactured products and AI applications on a
regular basis. Do you drive a car that was manufactured after 1995?
There was a robot somewhere on the assembly line, controlled in
some way by an AI component. Do you buy or sell stocks? AI
applications are widely available and used on a regular basis in
the financial world. Do you use an auto-focus camera? Or a cell
phone? Do you eat food? As it turns out, AI-based methods have been
slowly creeping into the programmatic zeitgeist, finding quarter in
a rainbow of market sectors, without the bother of significant
media attention. As a matter of fact, the new phase in AI doesn't
even use the word "intelligence." A synopsis of AI inroads into
successful real-world applications includes:

  • Rules and heuristics: Emergency response and diagnostic
    applications.
  • Neural networks: Fraud prevention, terrorist tracking, and
    consumer preference predictions.
  • Bayesian networks: Financial analysis, natural language
    processing, and software usability.
  • Genetic and evolutionary algorithms: Molecular design
    applications, complexity analysis.
  • Hybrid models: Machine vision and sensor applications.
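
The "genetic and evolutionary algorithms" entry above (and the
"evolutionary engineering" de Garis invokes later) can be made
concrete with a toy sketch. Everything here, the function names,
parameters, and the one-max fitness task, is an illustrative
assumption, not code from any system the article mentions:

```python
import random

def evolve(fitness, length=12, pop_size=30, generations=60, seed=42):
    """Toy genetic algorithm over bit strings (hypothetical example).

    Tournament selection (size 2), single-point crossover, and a
    small per-child chance of a one-bit mutation.
    """
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            # Tournament of two: the fitter individual wins.
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        for _ in range(pop_size):
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, length)      # single-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.05:             # rare one-bit mutation
                i = rng.randrange(length)
                child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# "One-max" task: fitness is simply the number of 1 bits, so the
# population should drift toward the all-ones string.
best = evolve(sum)
```

Under these deliberately easy settings the population converges on
or near the all-ones string; real evolutionary engineering differs
mainly in representation, fitness function, and scale, not in this
basic loop.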

All of these developments use "classic" AI; in other words,
approaches to artificial intelligence and computing that are
recognized and taught in a number of computer science programs
around the world. While this is laudable, it is the new phase of AI
that is of greater interest--the "Hawkins/de Garis" epoch of AI,
which I have named for two esteemed gentlemen who have contributed
influential thoughts in recent years, the sum of which will change
the course of future AI work quite dramatically and forever.

What makes this phase "new"? Two characteristics epitomize the
new phase in AI; the two terms in question also happen to serve as
the juxtaposed entities in the title of this series: the artisan
and the artilect. The artisan-like characteristics of this new
phase in AI represent the unique semiotic skills of humanity, which
form the basis for our claim to intelligence itself. It is the
"engineering is the art of the possible" school of craftsmanship,
which has taken raw learning and turned it into commerce. Indeed,
all commercial instantiations of AI in the past decade have been
enabled by the artisan component of the human complex. To now seed
AI itself with the creative essence of Homo sapiens clearly marks
the beginning of a remarkable new phase. That, coupled with the
artilect, the imaginary "god-like," hyper-super-duper-megalithic
uber system of doom with more qubits and computational capability
than has ever existed in this galactic quadrant, represents the end
game in the Moore's Law saga. We are a few short years, or decades,
or generations away from the emergence of the artilect, depending
on whom you believe.

The Age of the Artisan

This new phase in AI begins with Jeff Hawkins, the ultimate
artisan. Hawkins is a founder of Palm and Handspring and almost
single-handedly revived the handheld computing industry--the
same Jeff Hawkins who worked with his father in a boat yard in Long
Island, inventing crazy boat stuff while growing up. Hailed by
Nobel laureates and venture capitalists alike, his gauntlet-laying
2004 opus On Intelligence breaks away from all of the classic
approaches and posits anew the path to AI nirvana. Hawkins begins
with a concept the entrenched AI community has failed to
acknowledge since the beginning: human intelligence. In his book,
Hawkins cites ingrained resistance to designing machine
intelligence models based on the masterpiece, nature's finest
cognitive creation: the human neocortex. Why? Probably because we
simply do not understand how it works. How can we reverse engineer
a device that is so inscrutable?

According to Hawkins, when he left Intel to study intelligence
at MIT in 1981, the current thinking in the field was that we
should not limit ourselves to the messiness of nature's flawed
instruments when we can do so much better with our well-reasoned
designs. Hawkins argued that the starting point must be real
intelligence if we are to ever have hope of creating a machine
version. Years later, together with Sandra Blakeslee, he wrote an
account of his investigations into the workings of the human
brain. The book details much of Hawkins' research and
his theories as to how the instrument actually works, and how that
elegant design can imbue machines with something much more akin to
human intelligence than anything yet seen.

An artisan-cum-entrepreneur, Hawkins could do nothing else after
finally releasing the fruit of his 25-year investigation except
start yet another industry-disrupting device company. Visit the
Numenta website and see for yourself the firm that, if successful,
will provide general-purpose, truly human-competitive devices. If
you think out-sourcing
was bad, wait until what Dick Samson calls
"off-peopling" moves from the early-adopter to the early majority
phase--all jobs from truck drivers, burger flippers, and airline
pilots to (yes) programmers, customer service representatives, and
the bulk of the police force will be off-sourced--automated. No people needed. Even a marginally successful Numenta will force
major structural changes the likes of which we cannot even
anticipate. And that's the good news.

Enter the Artilect

The other side of the "new phase of AI" coin is even less
cheerful. Hugo de Garis published The Artilect War with the
term "war" in the title for a very good reason: as obtuse as his
bloviated text tends to read, he at least understands what our
species is really good at. The artisan apogee of accomplishment is
war. The thesis of de Garis' book is that there's
only room on this small globe for one dominant species--hence,
inevitable conflict will erupt, the magnitude of which will be more
devastating than all previous wars combined, because unfettered use
of 21st-century weapons will be involved. It will be the "Cosmists
versus the Terrans," in his nomenclature. The Cosmists are those
highly enlightened human beings (like him) who are in favor of
building artilects, despite the risk that these inscrutable
entities may not like us very much. The Terrans, the less-evolved
of us who simply cannot understand why we would ever risk building
machines with the innate capacity to annihilate the entire artisan
species, will not like the Cosmists very much, in the de Garis
mythology. Therefore, war will erupt. Big war. The summation of all
wars. And this will all come about due to Moore's Law, quantum
computing, and the "evolutionary engineering" techniques which de
Garis favors in his own brain-building work.

If my sarcasm has not yet successfully communicated my disdain
for the de Garis hardcover, let me be clear: his book is tripe. As
a sometime writer of tripe, I know it when I smell it, and his
entire edition qualifies. The only possible exception is in the few
pages he devotes to the merits of reversible logic vis-a-vis
entropy and heat generation. From an artisan perspective, this is
useful to know. But a single website could
have done just as well and left the rest of the nonsense where it
belongs: in the "bad science fiction" bin at a Super Saver near you.
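
Those few pages on reversible logic reward a concrete look. A
reversible gate discards no information, which is why (via
Landauer's principle) reversible logic bears on entropy and heat
generation. The Toffoli (controlled-controlled-NOT) gate, sketched
below as my own illustration rather than anything from the book,
is its own inverse and suffices to build any classical circuit:

```python
def toffoli(a, b, c):
    """Toffoli (CCNOT) gate on bits: flips the target c iff both
    controls a and b are 1. Note toffoli(a, b, 1)[2] computes
    NAND(a, b), so the gate is universal for classical logic."""
    return a, b, c ^ (a & b)

# Reversibility: applying the gate twice restores every input,
# so no information is erased along the way.
for bits in [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]:
    assert toffoli(*toffoli(*bits)) == bits
```

Because nothing is erased, a computation built entirely from such
gates has no thermodynamic minimum of dissipated heat, the point
de Garis raises.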

I cannot fairly judge de Garis' research, which one would hope
is of higher quality than the prose, plot, and purpose of his
writing (one gets the distinct impression that de Garis yearns for
the celebrity-like status of Kurzweil, et al). What little he does
have to say about his work in the book bears no resemblance to the
innovative, well-considered investigation that Hawkins articulates.
It seems as though the de Garis approach to machine intelligence is
brute force--accept anything Turing had to say on the matter as
gospel, wait for Moore's Law (which, like love, it would seem,
covers a multitude of sins), and trust in "evolutionary
engineering," which he doesn't explain in any detail in his
book--and by the time his grandchildren come of age, magic will
happen, artilects will appear, and the war is on. In all fairness,
de Garis has published more in-depth material, which may prove to
be of some use in the pursuit of AI solutions and even vindicate
his evolutionary claims, although the work does appear to embrace
a "kitchen sink" philosophy, which is probably not useful and
certainly not elegant.

So why bother with de Garis? Why even consider "artilects" in a
discussion of a new phase in AI? The answer is simple: because we
simply do not know if general purpose, human-competitive
intelligence can actually be engineered. We can imagine such
machines. We often have: The Terminator, The Matrix, I, Robot,
the Borg of Star Trek. Our musings generally imbue these
creations with evil, bone-munching intent. Before the Turing Test,
Norbert Wiener
co-opted the term "cybernetics" to christen a field of study and an
intellectual movement that fundamentally equates human beings with
machines. If the universe is a clock, we are the cogs. Can such
machines be built? According to the Teleological Society (which
after WWII became the Cybernetics Group and the Macy Conferences),
which Wiener helped to found with cohorts like fellow
mathematician John von Neumann and anthropologist Margaret Mead,
they already have. Look in a mirror to see an image of one. But
that begs the question. Can we engineer them? Can we
design, assemble, test and bless a machine with general purpose
human-competitive intelligence? Can the artisan build the artilect?
If so, it will probably be based on work more akin to the
Hierarchical Temporal Memory systems that Hawkins touts. The de
Garis scenario is, therefore, something we must at least consider.
Hawkins and de Garis are both out to build brains--mechanical
brains with the right stuff to actually rival our own.

The Teleology of Intelligence

The "anthropic principle," first proffered by physicist Brandon
Carter in 1973
at a conference in Kraków celebrating the 500th birthday of
Copernicus, hints as to the nature of being and purpose. His paper
"Large Number Coincidences and the Anthropic Principle in
Cosmology" stated: "Although our situation is not necessarily
central [it was a Copernican celebration after all], it is
inevitably privileged to some extent."

The designed-for-intelligence (-by-intelligence?) thesis presents
circumstantial evidence for the specialness of the parameters of
our universe insofar as it necessarily supports the emergence of
universe-modeling intelligence: us artisans. As it would happen,
the "Cogito ergo mundus talis est" mantra ("I think, therefore the
world is as it is") that Carter first articulated has given rise to
much debate in our church-versus-state, science-versus-religion,
post-Enlightenment, post-modern era. The anthropic principle
constrains the Big Bang; the initial conditions of the universe had
to have been precisely tuned such as to give rise to the very
specific set of laws that govern it, or else we wouldn't be here to
reflect on the matter. There was such little room for deviation
with so many physical variables that it is quite implausible to
imagine coincidence at the helm. More on that later.

The term "teleology" refers to the study of design or purpose in
natural phenomena. Darwin would argue that the purpose of
intelligence is a natural outcome in the struggle for fitscape
dominance--greater intelligence happens to do better at the
propagation and survival game than does lesser intelligence--and
no other explanation is needed. However, according to John Smart, a
complexity theorist and editor of an advanced futurist publication:

"[A]s local computational complexity increased, our natural
genetic parameters have became [sic] increasingly self-selected
(e.g., rationally-directed, by human society, first through mating
choices, and then through medical intervention). This is indeed
the apparent teleology, or purpose, of intelligence, to move us
from evolutionary (random, chaotic) to developmental (statistically
predictable) contexts."

This implies that evolution itself, as well as cultural
developments, technology, and even the yearning for and fear of
artilects, is precisely the direction we are naturally goaded to
take. The teleology of intelligence first gives rise to the
artisan, and ultimately leads to the artilect. And now both of
these poles of human endeavor, as epitomized by Hawkins and de
Garis, are pursuing the ultimate engineering feat: building a
general purpose, human-competitive brain.

So where exactly is AI headed? Are we heir to a teleology of
intelligence, or simply stumbling forward in a drunken, random
walk? How will we respond to the emergence of entities that have
the potential of dramatically surpassing human intelligence,
individually and collectively? Are such entities possible? If so,
how long do we have before we come face-to-interface with them? And
what might they think of us? Are we now at the dawn of that epoch?
These and other interesting questions are considered in the next
installment in this series.

Max Goff is the Chief Marketing Officer for Digital Reasoning Systems, Inc., headquartered in Nashville, Tennessee.