The Drexler-Smalley Debate on Molecular Assembly

http://pubs.acs.org/cen/coverstory/8148/8148counterpoint.html

 

http://iranscope.ghandchi.com/Anthology/Drexler-Smalley.htm

http://www.kurzweilai.net/meme/frame.html?main=/articles/art0604.html

The Drexler-Smalley Debate on Molecular Assembly

by Ray Kurzweil

 

Nanotechnology pioneer Eric Drexler and Rice University Professor and Nobelist Richard Smalley have engaged in a crucial debate on the feasibility of molecular assembly. Smalley's position, which denies both the promise and the peril of molecular assembly, will ultimately backfire and will fail to guide nanotechnology research in the needed constructive direction, says Ray Kurzweil. By the 2020s, he predicts, molecular assembly will provide tools to effectively combat poverty, clean up our environment, overcome disease, extend human longevity, and pursue many other worthwhile goals.

Published on Kurzweilai.net Dec. 1, 2003.

Nanotechnology pioneer Eric Drexler and Rice University Professor and Nobelist Richard Smalley have engaged in a crucial debate on the feasibility of molecular assembly, which is the key to the most revolutionary capabilities of nanotechnology.  Although Smalley was originally inspired by Drexler's ground-breaking works and has himself become a champion of contemporary research initiatives in nanotechnology, he has also taken on the role of key critic of Drexler's primary idea of precisely guided molecular manufacturing. 

This debate has picked up intensity with today's publication of several rounds of this dialogue between these two pioneers.  First, some background:

Background: The Roots of Nanotechnology

Nanotechnology promises the tools to rebuild the physical world, our bodies and brains included, molecular fragment by molecular fragment, potentially atom by atom.  We are shrinking the key feature size of technology, in accordance with what I call the "law of accelerating returns," at the exponential rate of approximately a factor of 4 per linear dimension per decade.  At this rate, the key feature sizes for most electronic and many mechanical technologies will be in the nanotechnology range, generally considered to be under 100 nanometers, by the 2020s (electronics has already dipped below this threshold, albeit not yet in three-dimensional structures and not self-assembling).  Meanwhile, there has been rapid progress, particularly in the last several years, in preparing the conceptual framework and design ideas for the coming age of nanotechnology. 
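To make the arithmetic concrete, here is a minimal sketch of that projection; the starting feature size and year are assumed values chosen only for illustration, not measurements.

```python
# Illustrative projection of the "factor of ~4 per linear dimension per
# decade" shrinkage claim. The starting size and year are assumptions
# chosen only to show how the 100 nm threshold is crossed; they are not data.

def feature_size_nm(year, start_year=2003, start_size_nm=1000.0, factor_per_decade=4.0):
    """Projected key feature size under steady exponential shrinkage."""
    decades = (year - start_year) / 10.0
    return start_size_nm / (factor_per_decade ** decades)

for year in (2003, 2013, 2023):
    print(year, round(feature_size_nm(year), 1), "nm")
# 2003: 1000.0 nm -> 2013: 250.0 nm -> 2023: 62.5 nm, i.e. a hypothetical
# micron-scale technology drops below the 100 nm threshold in the early 2020s.
```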

 

Most nanotechnology historians date the conceptual birth of nanotechnology to physicist Richard Feynman's seminal speech in 1959, "There's Plenty of Room at the Bottom," in which he described the profound implications and the inevitability of engineering machines at the level of atoms:

 

"The principles of physics, as far as I can see, do not speak against the possibility of maneuvering things atom by atom.  It would be, in principle, possible. . . .for a physicist to synthesize any chemical substance that the chemist writes down. . .How?  Put the atoms down where the chemist says, and so you make the substance.  The problems of chemistry and biology can be greatly helped if our ability to see what we are doing, and to do things on an atomic level, is ultimately developed – a development which I think cannot be avoided."

An even earlier conceptual root for nanotechnology was formulated by the information theorist John Von Neumann in the early 1950s with his model of a self-replicating system based on a universal constructor combined with a universal computer.  In this proposal, the computer runs a program that directs the constructor, which in turn constructs a copy of both the computer (including its self-replication program) and the constructor.  At this level of description, Von Neumann's proposal is quite abstract -- the computer and constructor could be made in a great variety of ways, as well as from diverse materials, and could even be a theoretical mathematical construction.  He took the concept one step further and proposed a "kinematic constructor," a robot with at least one manipulator (arm) that would build a replica of itself from a "sea of parts" in its midst. 
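As a purely schematic illustration of Von Neumann's idea (my own toy sketch, not his formal construction), the following fragment models a machine that carries its own program and builds copies of itself from a finite "sea of parts"; the part names and counts are invented.

```python
# Toy model of Von Neumann's self-replicating scheme: a universal computer
# runs a program that drives a constructor, which builds a copy of the
# computer, the constructor, and the program from a "sea of parts."
# Everything here is schematic; no physics or chemistry is modeled.

from collections import Counter

BLUEPRINT = Counter({"computer_part": 3, "constructor_part": 4})  # assumed part counts

class Replicator:
    def __init__(self, program):
        self.program = program                      # includes its own build instructions

    def construct(self, sea_of_parts):
        """Build one offspring if the sea of parts can supply the blueprint."""
        if all(sea_of_parts[p] >= n for p, n in BLUEPRINT.items()):
            sea_of_parts.subtract(BLUEPRINT)        # consume raw parts
            return Replicator(list(self.program))   # offspring gets a copy of the program
        return None

sea = Counter({"computer_part": 9, "constructor_part": 12})
population = [Replicator(["build computer", "build constructor", "copy program"])]

grew = True
while grew:
    offspring = [m.construct(sea) for m in population]
    offspring = [child for child in offspring if child is not None]
    grew = bool(offspring)
    population.extend(offspring)

print(len(population))   # 4: replication stops when the parts run out
```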

It was left to Eric Drexler to found the modern field of nanotechnology, with a draft of his seminal Ph.D. thesis in the mid-1980s, by essentially combining these two intriguing suggestions.  Drexler described a Von Neumann kinematic constructor, which for its "sea of parts" used atoms and molecular fragments, as suggested in Feynman's speech.  Drexler's vision cut across many disciplinary boundaries and was so far-reaching that no one was daring enough to be his thesis advisor except for my own mentor, Marvin Minsky.  Drexler's doctoral thesis (premiered in his 1986 book Engines of Creation and articulated technically in his 1992 book Nanosystems) laid out the foundation of nanotechnology and provided the road map still being pursued today. 

Von Neumann's Universal Constructor, as applied to atoms and molecular fragments, was now called a "universal assembler."  Drexler's assembler was universal because it could essentially make almost anything in the world.  A caveat is in order here.  The products of a universal assembler necessarily have to follow the laws of physics and chemistry, so only atomically stable structures would be viable.  Furthermore, any specific assembler would be restricted to building products from its sea of parts, although the feasibility of using individual atoms has been repeatedly demonstrated. 

Although Drexler did not provide a detailed design of an assembler, and such a design has still not been fully specified, his thesis did provide extensive existence proofs for each of the principal components of a universal assembler, which include the following subsystems:

  • The computer: to provide the intelligence to control the assembly process.  As with all of the subsystems, the computer needs to be small and simple.  Drexler described an intriguing mechanical computer with molecular "locks" instead of transistor gates.  Each lock required only 5 cubic nanometers of space and could switch 20 billion times a second.  This proposal remains more competitive than any known electronic technology, although electronic computers built from three-dimensional arrays of carbon nanotubes may be a suitable alternative.

     
  • The instruction architecture: Drexler and his colleague Ralph Merkle have proposed a "SIMD" (single instruction, multiple data) architecture in which a single data store would record the instructions and transmit them to trillions of molecular-sized assemblers (each with its own simple computer) simultaneously.  Thus each assembler would not have to store the entire program for creating the desired product.  This "broadcast" architecture also addresses a key safety concern: if the self-replication process got out of control, it could be shut down by terminating the centralized source of the replication instructions.  However, as Drexler points out[1], a nanoscale assembler does not necessarily have to be self-replicating. Given the inherent dangers in self-replication, the ethical standards proposed by the Foresight Institute contain prohibitions against unrestricted self-replication, especially in a natural environment.

     
  • Instruction transmission: transmission of the instructions from the centralized data store to each of the many assemblers would be accomplished electronically if the computers were electronic, or through mechanical vibrations if Drexler's concept of a mechanical computer were used. 

     
  • The construction robot: the constructor would be a simple molecular robot with a single arm, similar to Von Neumann's kinematic constructor, but on a tiny scale.  The feasibility of building molecular-based robot arms, gears, rotors, and motors has been demonstrated in the years since Drexler's thesis, as I discuss below.

     
  • The robot arm tip: Drexler's follow-up book in 1992, Nanosystems: Molecular Machinery, Manufacturing, and Computation, provided a number of feasible chemistries for the tip of the robot arm that would be capable of grasping (using appropriate atomic force fields) a molecular fragment, or even a single atom, and then depositing it in a desired location.  We know from the chemical vapor deposition process used to construct artificial diamonds that it is feasible to remove individual carbon atoms, as well as molecular fragments that include carbon, and then place them in another location through precisely controlled chemical reactions at the tip.  Building artificial diamond this way is a chaotic process involving trillions of atoms, but the underlying chemistry has been harnessed in designs for a robot arm tip that can remove hydrogen atoms from a source material and deposit them at desired locations in a molecular machine being constructed.  In this proposal, the tiny machines are built out of a diamond-like (called "diamondoid") material.  In addition to having great strength, the material can be doped with impurities in a precise fashion to create electronic components such as transistors.  Simulations have shown that gears, levers, motors, and other mechanical systems can also be constructed from these carbon arrays.  Additional proposals have been made in the years since, including several innovative designs by Ralph Merkle[2].  In recent years, there has been a great deal of attention on carbon nanotubes, composed of hexagonal arrays of carbon atoms rolled into tubes, which are also capable of providing both mechanical and electronic functions at the molecular level. 

     
  • The assembler's internal environment needs to prevent environmental impurities from interfering with the delicate assembly process.  Drexler's proposal is to maintain a near vacuum and build the assembler walls out of the same diamondoid material that the assembler itself is capable of making. 

     
  • The energy required for the assembly process can be provided either through electricity or through chemical energy.  Drexler proposed a chemical process with the fuel interlaced with the raw building material.  More recent proposals utilize nanoengineered fuel cells incorporating hydrogen and oxygen or glucose and oxygen.

Although many configurations have been proposed, the typical assembler has been described as a tabletop unit that can manufacture any physically possible product for which we have a software description.  Products can range from computers, clothes, and works of art to cooked meals.  Larger products, such as furniture, cars, or even houses, can be built in a modular fashion, or using larger assemblers.  Of particular importance, an assembler can create copies of itself.  The incremental cost of creating any physical product, including the assemblers themselves, would be pennies per pound, basically the cost of the raw materials.  The real cost, of course, would be the value of the information describing each type of product, that is, the software that controls the assembly process.  Thus everything of value in the world, including physical objects, would consist essentially of information.  We are not that far from this situation today, since the "information content" of products is rapidly asymptoting to 100 percent of their value. 

In operation, the centralized data store sends out commands simultaneously to all of the assembly robots.  There would be trillions of robots in an assembler, each executing the same instruction at the same time.  The assembler creates these molecular robots by starting with a small number and then using these robots to create additional ones in an iterative fashion, until the requisite number of robots has been created. 

Each local robot has a local data storage that specifies the type of mechanism it is building.  This local data storage is used to mask the global instructions being sent from the centralized data store so that certain instructions are blocked and local parameters are filled in.  In this way, even though all of the assemblers are receiving the same sequence of instructions, there is a level of customization to the part being built by each molecular robot.  Each robot extracts the raw materials it needs, which include individual carbon atoms and molecular fragments, from the source material.  This source material also includes the requisite chemical fuel.  All of the requisite design requirements, including routing of the instructions and the source material, were described in detail in Drexler's two classic works.
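A minimal sketch of the broadcast-and-mask scheme described in the list above and in this paragraph; this is my own illustrative Python, not a published design, and all names, tags, and the instruction format are invented.

```python
# Toy model of the "broadcast" (SIMD) assembler architecture: one central
# instruction store streams a single instruction sequence to every robot;
# each robot's local data masks out instructions that do not apply to the
# part it is building; halting the central store halts all replication.

class CentralStore:
    def __init__(self, program):
        self.program = program
        self.active = True            # terminating the store is the kill switch

    def broadcast(self):
        for instruction in self.program:
            if not self.active:       # safety: central shutdown stops every robot
                return
            yield instruction

class AssemblerRobot:
    def __init__(self, mask, parameters):
        self.mask = mask              # local data: which instruction tags apply
        self.parameters = parameters  # local customization values
        self.log = []

    def execute(self, instruction):
        tag, op = instruction
        if tag in self.mask:          # block instructions meant for other parts
            self.log.append(op.format(**self.parameters))

# Two robots build different parts from the same broadcast stream.
program = [("gear", "place carbon at gear site {gear_site}"),
           ("lever", "place carbon at lever site {lever_site}"),
           ("gear", "bond fragment at gear site {gear_site}")]

store = CentralStore(program)
robots = [AssemblerRobot({"gear"}, {"gear_site": 1, "lever_site": 0}),
          AssemblerRobot({"lever"}, {"gear_site": 0, "lever_site": 7})]

for instruction in store.broadcast():
    for robot in robots:              # every robot sees every instruction
        robot.execute(instruction)

print(robots[0].log)   # ['place carbon at gear site 1', 'bond fragment at gear site 1']
print(robots[1].log)   # ['place carbon at lever site 7']
```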

The Biological Assembler

Nature shows that molecules can serve as machines because living things work by means of such machinery.  Enzymes are molecular machines that make, break, and rearrange the bonds holding other molecules together.  Muscles are driven by molecular machines that haul fibers past one another.  DNA serves as a data-storage system, transmitting digital instructions to molecular machines, the ribosomes, that manufacture protein molecules.  And these protein molecules, in turn, make up most of the molecular machinery.

 -- Eric Drexler

The ultimate existence proof of the feasibility of a molecular assembler is life itself.  Indeed, as we deepen our understanding of the information basis of life processes, we are discovering specific ideas to address the design requirements of a generalized molecular assembler.  For example, proposals have been made to use a molecular energy source of glucose and ATP similar to that used by biological cells. 

Consider how biology solves each of the design challenges of a Drexler assembler.  The ribosome represents both the computer and the construction robot.  Life does not use centralized data storage, but provides the entire code to every cell.  The ability to restrict the local data storage of a nanoengineered robot to only a small part of the assembly code (using the "broadcast" architecture), particularly when doing self-replication, is one critical way nanotechnology can be engineered to be safer than biology. 

With the advent of full-scale nanotechnology in the 2020s, we will have the potential to replace biology's genetic information repository in the cell nucleus with a nanoengineered system that would maintain the genetic code and simulate the actions of RNA, the ribosome, and other elements of the computer in biology's assembler.  There would be significant benefits in doing this.  We could eliminate the accumulation of DNA transcription errors, one major source of the aging process.  We could introduce DNA changes to essentially reprogram our genes (something we'll be able to do long before this scenario, using gene-therapy techniques). 

With such a nanoengineered system, the recommended broadcast architecture could enable us to turn off unwanted replication, thereby defeating cancer, autoimmune reactions, and other disease processes.  Although most of these disease processes will have already been defeated by genetic engineering, reengineering the computer of life using nanotechnology could eliminate any remaining obstacles and create a level of durability and flexibility that goes vastly beyond the inherent capabilities of biology.

Life's local data storage is, of course, the DNA strands, broken into specific genes on the chromosomes.  The task of instruction-masking (blocking genes that do not contribute to a particular cell type) is controlled by the short RNA molecules and peptides that govern gene expression.  The internal environment in which the ribosome is able to function is the particular chemical environment maintained inside the cell, which includes a particular acid-alkaline equilibrium (pH between 6.8 and 7.1 in human cells) and other chemical balances needed for the delicate operations of the ribosome.  The cell membrane is responsible for protecting this internal cellular environment from disturbance by the outside world. 

The biological counterpart of the robot arm tip is the ribosome's enzymatic machinery, which breaks each amino acid off of its specific transfer RNA and connects it to the adjoining amino acid with a peptide bond. 

However, the goal of molecular manufacturing is not merely to replicate the molecular assembly capabilities of biology.  Biological systems are limited to building systems from protein, which has profound limitations in strength and speed.  Nanobots built from diamondoid gears and rotors can be thousands of times faster and stronger than biological cells.  The comparison is even more dramatic with regard to computation: the switching speed of nanotube-based computation would be millions of times faster than the extremely slow transaction speed of the electrochemical switching used in mammalian interneuronal connections (typically around 200 transactions per second, although the nonlinear transactions that take place in the dendrites and synapses are more complex than single computations). 

The concept of a diamondoid assembler described above uses a consistent input material (for construction and fuel).  This is one of several protections against molecule-scale replication of robots in an uncontrolled fashion in the outside world.  Biology's replication robot, the ribosome, also requires carefully controlled source and fuel materials, which are provided by our digestive system.  As nano-based replicators become more sophisticated, more capable of extracting carbon atoms and carbon-based molecular fragments from less well-controlled source materials, and able to operate outside of controlled replicator enclosures such as in the biological world, they will have the potential to present a grave threat to that world, particularly in view of the vastly greater strength and speed of nano-based replicators over any biological system.  This is, of course, the source of great controversy, which is alluded to in the Drexler-Smalley debate article and letters.

In the decade since publication of Drexler's Nanosystems, each aspect of Drexler's conceptual designs has been strengthened through additional design proposals, supercomputer simulations, and, most importantly, actual construction of molecular machines.  Boston College chemistry professor T. Ross Kelly reported in the journal Nature the construction of a chemically powered nanomotor built from 78 atoms.[3]  A biomolecular research group headed by C. D. Montemagno created an ATP-fueled nanomotor.[4]  Another molecule-sized motor, fueled by solar energy, was created by Ben Feringa at the University of Groningen in the Netherlands out of 58 atoms.[5]  Similar progress has been made on other molecular-scale mechanical components such as gears, rotors, and levers.  Systems demonstrating the use of chemical energy and acoustic energy (as originally described by Drexler) have been designed, simulated, and, in many cases, actually constructed.  Substantial progress has been made in developing various types of electronic components from molecule-scale devices, particularly in the area of carbon nanotubes, an area that Smalley has pioneered. 

Fat and Sticky Fingers

In the wake of rapidly expanding development of each facet of future nanotechnology systems, no serious flaw in Drexler's universal assembler concept has been discovered or described.  Smalley's highly publicized objection in Scientific American [6] was based on a distorted description of the Drexler proposal; it ignored the extensive body of work in the past decade.  As a pioneer of carbon nanotubes, Smalley has gone back and forth between enthusiasm and skepticism, having written that "nanotechnology holds the answer, to the extent there are answers, to most of our pressing material needs in energy, health, communication, transportation, food, water …."

Smalley describes Drexler's assembler as consisting of five to ten "fingers" (manipulator arms) to hold, move, and place each atom in the machine being constructed.  He then goes on to point out that there isn't room for so many fingers in the cramped space in which a nanobot assembly robot has to work (which he calls the "fat fingers" problem) and that these fingers would have difficulty letting go of their atomic cargo because of molecular attraction forces (the "sticky fingers" problem).  Smalley describes the "intricate three-dimensional waltz that is carried out" by five to fifteen atoms in a typical chemical reaction.  Drexler's proposal doesn't look anything like the straw-man description that Smalley criticizes.  Drexler's proposal, and most of those that have followed it, use a single probe, or "finger." 

Moreover, there have been extensive descriptions and analyses of viable tip chemistries that do not involve grasping and placing atoms as if they were mechanical pieces to be deposited in place.  For example, the feasibility of moving hydrogen atoms using Drexler's "propynyl hydrogen abstraction" tip[7] has been extensively confirmed in the intervening years.[8]  The ability of the scanning tunneling microscope (STM), developed at IBM in 1981, and of the more sophisticated atomic force microscope to place individual atoms through specific reactions of a tip with a molecular-scale structure provides an additional existence proof.  Indeed, if Smalley's critique were valid, none of us would be here to discuss it because life itself would be impossible. 

Smalley also objects that despite "working furiously . . . generating even a tiny amount of a product would take [a nanobot] . . . millions of years."  Smalley is correct, of course, that an assembler with only one nanobot wouldn't produce any appreciable quantities of a product.  However, the basic concept of nanotechnology is that we will need trillions of nanobots to accomplish meaningful results.  This is also the source of the safety concerns that have received ample attention.  Creating trillions of nanobots at reasonable cost will require the nanobots to make themselves.  This self-replication solves the economic issue while introducing grave dangers.  Biology used the same solution to create organisms with trillions of cells, and indeed we find that virtually all diseases derive from biology's self-replication process gone awry. 

Earlier challenges to the concepts underlying nanotechnology have also been effectively addressed.  Critics pointed out that nanobots would be subject to bombardment by thermal vibration of nuclei, atoms, and molecules.  This is one reason conceptual designers of nanotechnology have emphasized building structural components from diamondoid or carbon nanotubes.  Increasing the strength or stiffness of a system reduces its susceptibility to thermal effects.  Analyses of these designs have shown them to be thousands of times more stable in the presence of thermal effects than biological systems, so they can operate in a far wider temperature range[9].
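One standard way to quantify this, consistent with the analyses cited above, is the equipartition estimate: a component held in place with an effective restoring stiffness $k_s$ fluctuates thermally by

$$ \tfrac{1}{2}\,k_s\,\sigma^2 \;=\; \tfrac{1}{2}\,k_B T \quad\Longrightarrow\quad \sigma \;=\; \sqrt{k_B T / k_s}. $$

As an illustrative number (the stiffness value is assumed, not taken from a specific design), $k_s \approx 10$ N/m at room temperature ($k_B T \approx 4.1 \times 10^{-21}$ J) gives $\sigma \approx 0.02$ nm, a small fraction of an atomic diameter; stiffer diamondoid components shrink this spread further.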

Similar challenges were made regarding positional uncertainty from quantum effects, based on the extremely small feature size of nanoengineered devices.  Quantum effects are significant for an electron, but a single carbon atom nucleus is more than 20,000 times more massive than an electron.  A nanobot will be constructed from hundreds of thousands to millions of carbon and other atoms, making it billions of times more massive than an electron.  Plugging this ratio into the fundamental equation for quantum positional uncertainty shows this to be an insignificant factor. 
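A back-of-the-envelope version of that calculation, under the simplifying assumption that the momentum spread is set by thermal motion, goes as follows:

$$ \Delta x\,\Delta p \;\ge\; \frac{\hbar}{2}, \qquad \Delta p \sim \sqrt{m\,k_B T} \quad\Longrightarrow\quad \Delta x \;\gtrsim\; \frac{\hbar}{2\sqrt{m\,k_B T}} \;\propto\; \frac{1}{\sqrt{m}}. $$

A nanobot roughly ten billion times the mass of an electron therefore has a quantum positional uncertainty on the order of $10^5$ times smaller than an electron's under the same conditions, far below atomic dimensions.  This is a rough scaling sketch, not a rigorous treatment.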

Power has represented another challenge.  Drexler's original proposals involved glucose-oxygen fuel cells, which have held up well in feasibility studies.  An advantage of the glucose-oxygen approach is that nanomedicine applications can harness the glucose, oxygen, and ATP resources already provided by the human digestive system.  A nanoscale motor was recently created using propellers made of nickel and powered by an ATP-based enzyme.[10] 

However, recent progress in implementing MEMS-scale and even nanoscale hydrogen-oxygen fuel cells has provided an alternative approach.  Hydrogen-oxygen fuel cells, with hydrogen provided by safe methanol fuel, have made substantial progress in recent years.  A small company in Massachusetts, Integrated Fuel Cell Technologies, Inc.,[11] has demonstrated a MEMS-based fuel cell.  Each postage-stamp-sized device contains thousands of microscopic fuel cells and includes the fuel lines and electronic controls.  NEC plans to introduce fuel cells based on nanotubes in 2004 for notebook computers and other portable electronics.  The company claims these small power sources will run devices for up to 40 hours before the user needs to change the methanol canister. 

The Debate Heats Up

On April 16, 2003, Drexler responded to Smalley's Scientific American article with an open letter.  He cited 20 years of research by himself and others and responded specifically to the fat and sticky fingers objection.  As I discussed above, molecular assemblers were never described as having fingers at all, but rather as using precise positioning of reactive molecules.  Drexler cited biological enzymes and ribosomes as examples of precise molecular assembly in the natural world.  Drexler closes by quoting Smalley's own observation that "when a scientist says something is possible, they're probably underestimating how long it will take.  But if they say it's impossible, they're probably wrong."

Three more rounds of this debate were published today.  Smalley responds to Drexler's open letter by backing off his fat and sticky fingers objection and acknowledging that enzymes and ribosomes do indeed engage in the precise molecular assembly that Smalley had earlier indicated was impossible.  Smalley says biological enzymes only work in water and that such water-based chemistry is limited to biological structures such as "wood, flesh and bone."  As Drexler has stated[12], this is erroneous.  Many enzymes, even those that ordinarily work in water, can also function in anhydrous organic solvents, and some enzymes can operate on substrates in the vapor phase, with no liquid at all.[13]

Smalley goes on to state (without any derivation or citations) that enzyme-like reactions can only take place with biological enzymes.  This is also erroneous.  It is easy to see why biological evolution adopted water-based chemistry.  Water is the most abundant substance found on our planet.  It also comprises 70 to 90 percent of our bodies, our food, and indeed of all organic matter.  Most people think of water as fairly simple, but it is a far more complex phenomenon than conventional wisdom suggests.

As every grade school child knows, water is composed of molecules, each containing two atoms of hydrogen and one atom of oxygen, giving the most commonly known chemical formula, H2O.  However, consider some of water's complications and their implications.  In a liquid state, the two hydrogen atoms make a 104.5° angle with the oxygen atom, which increases to 109.5° when water freezes.  This is why water molecules are more spread out in the form of ice, giving it a lower density than liquid water.  This is why ice floats. 

Although the overall water molecule is electrically neutral, the placement of the electrons creates polarization effects.  The side with the hydrogen atoms is relatively positive in electrical charge, whereas the oxygen side is slightly negative.  So water molecules do not exist in isolation; rather, they combine with one another in small groups to assume, typically, pentagonal or hexagonal shapes[14].  These multi-molecule structures can change back and forth between hexagonal and pentagonal configurations 100 billion times a second.  At room temperature, only about 3 percent of the clusters are hexagonal, but this increases to 100 percent as the water gets colder.  This is why snowflakes are hexagonal. 

These three-dimensional electrical properties of water are quite powerful and can break apart the strong chemical bonds of other compounds.  Consider what happens when you put salt into water.  Salt is quite stable when dry, but is quickly torn apart into its ionic components when placed in water.  The negatively charged oxygen side of the water molecules attracts positively charged sodium ions (Na+), while the positively charged hydrogen side of the water molecules attracts the negatively charged chloride ions (Cl-).  In the dry form of salt, the sodium and chlorine atoms are tightly bound together, but these bonds are easily broken by the electrical charge of the water molecules.  Water is considered "the universal solvent" and is involved in most of the biochemical pathways in our bodies.  So we can regard the chemistry of life on our planet primarily as water chemistry. 

However, the primary thrust of our technology has been to develop systems that are not limited to the restrictions of biological evolution, which exclusively adopted water-based chemistry and proteins as its foundation.  Biological systems can fly, but if you want to fly at 30,000 feet and at hundreds or thousands of miles per hour, you would use our modern technology, not proteins.  Biological systems such as human brains can remember things and do calculations, but if you want to do data mining on billions of items of information, you would want to use our electronic technology, not unassisted human brains. 

Smalley is ignoring the past decade of research on alternative means of positioning molecular fragments using precisely guided molecular reactions.  Precisely controlled synthesis of diamondoid (diamond-like material formed into precise patterns) has been extensively studied, including the ability to remove a single hydrogen atom from a hydrogenated diamond surface.[15]  Related research supporting the feasibility of hydrogen abstraction and precisely guided diamondoid synthesis has been conducted at the Materials and Process Simulation Center at Caltech; the Department of Materials Science and Engineering at North Carolina State University; the Institute for Molecular Manufacturing; the University of Kentucky; the United States Naval Academy; and the Xerox Palo Alto Research Center.[16]

Smalley is also ignoring the well-established scanning probe microscopes mentioned above, which use precisely controlled molecular reactions.  Building on these concepts, Ralph Merkle has described tip reactions that can involve up to four reactants.[17]  There is extensive literature on site-specific reactions that can be precisely guided and that would be feasible for the tip chemistry in a molecular assembler.[18]  Smalley ignores this body of literature when he maintains that only biological enzymes in water can perform this type of reaction.  Many tools that go beyond SPMs and that can reliably manipulate atoms and molecular fragments are now emerging. 

On September 3, 2003, Drexler responded to Smalley's response by alluding once again to the extensive body of literature that Smalley ignores.  He cites the analogy of a modern factory, only at the nanoscale.  He cites analyses based on transition state theory indicating that positional control would be feasible at megahertz frequencies for appropriately selected reactants. 

The latest installment of this debate is a follow-up letter by Smalley.  This letter is short on specifics and science and long on imprecise metaphors that avoid the key issues.  He writes, for example, that "much like you can't make a boy and a girl fall in love with each other simply by pushing them together, you cannot make precise chemistry occur as desired between two molecular objects with simple mechanical motion . . . cannot be done simply by mushing two molecular objects together."  He again acknowledges that enzymes do in fact accomplish this, but refuses to concede that such reactions could take place outside of a biological-like system: "this is why I led you . . . to talk about real chemistry with real enzymes . . . any such system will need a liquid medium.  For the enzymes we know about, that liquid will have to be water, and the types of things that can be synthesized with water around cannot be much broader than meat and bone of biology."

I can understand Drexler's frustration in this debate because I have had many critics who do not bother to read or understand the data and arguments that I have presented for my own conceptions of future technologies.  Smalley's argument is of the form that "we don't have 'X' today, therefore 'X' is impossible."  I encounter this class of argument repeatedly in the area of artificial intelligence.  Critics will cite the limitations of today's systems as proof that such limitations are inherent and can never be overcome.  These critics ignore the extensive list of contemporary examples of AI (for example, airplanes and weapons that fly and guide themselves, automated diagnosis of electrocardiograms and blood cell images, automated detection of credit card fraud, automated investment programs that routinely outperform human analysts, telephone-based natural language response systems, and hundreds of others), working systems that are commercially available today and that were only research programs a decade ago. 

Those of us who attempt to project into the future based on well-grounded methodologies are at a disadvantage.  Certain future realities may be inevitable, but they are not yet manifest, so they are easy to deny.  There was a small body of thought at the beginning of the 20th century that heavier-than-air flight was feasible, but mainstream skeptics could simply point out that if it were so feasible, why had it never been demonstrated?  In 1990, Kasparov scoffed at the idea that machine chess players could ever possibly defeat him.  When it happened in 1997, observers were quick to downplay the achievement by dismissing the importance of chess. 

Smalley reveals at least part of his motives at the end of his most recent letter when he writes:

"A few weeks ago I gave a talk on nanotechnology and energy titled 'Be a Scientist, Save the World' to about 700 middle and high school students in the Spring Branch ISD, a large public school system here in the Houston area.    Leading up to my visit the students were asked to 'write an essay on 'why I am a Nanogeek.  Hundreds responded, and I had the privilege of reading the top 30 essays, picking my favorite top 5. Of the essays I read, nearly half assumed that self-replicating nanobots were possible, and most were deeply worried about what would happen in their future as these nanobots spread around the world.    I did what I could to allay their fears, but there is no question that many of these youngsters have been told a bedtime story that is deeply troubling. You and people around you have scared our children."

I would point out to Smalley that earlier critics also expressed skepticism about whether worldwide communication networks, or the software viruses that would spread across them, were feasible.  Today, we have both the benefits and the damage from these capabilities.  However, along with the danger of software viruses has also emerged a technological immune system.  While it does not completely protect us, few people would advocate eliminating the Internet in order to eliminate software viruses.  We are obtaining far more benefit than damage from this latest example of intertwined promise and peril. 

Smalley's approach to reassuring the public about the potential abuse of this future technology is not the right strategy.  Denying the feasibility of both the promise and the peril of molecular assembly will ultimately backfire and fail to guide research in the needed constructive direction.  By the 2020s, molecular assembly will provide tools to effectively combat poverty, clean up our environment, overcome disease, extend human longevity, and pursue many other worthwhile goals. 

Like every other technology that humankind has created, nanotechnology can be used to amplify and enable our destructive side.  It is important that we approach this technology in a knowledgeable manner to gain the profound benefits it promises, while avoiding its dangers.  Drexler and his colleagues at the Foresight Institute have been at the forefront of developing the ethical guidelines and design considerations needed to guide the technology in a safe and constructive direction. 

Denying the feasibility of an impending technological transformation is a short-sighted strategy. 

Notes

[1] Chemical & Engineering News, December 1, 2003

[2] Ralph C. Merkle, "A proposed 'metabolism' for a hydrocarbon assembler," Nanotechnology 8 (1997): 149-162; http://www.zyvex.com/nanotech/hydroCarbonMetabolism.html.

[3] T.R. Kelly, H. De Silva, R.A. Silva, "Unidirectional rotary motion in a molecular system," Nature 401 (September 9, 1999): 150-152.

[4] C.D. Montemagno, G.D. Bachan, "Constructing nanomechanical devices powered by biomolecular motors," Nanotechnology 10 (1999): 225-231; G. D. Bachand, C.D. Montemagno, "Constructing organic / inorganic NEMS devices powered by biomolecular motors," Biomedical Microdevices 2 (2000): 179-184.

[5] N. Koumura, R.W. Zijlstra, R.A. van Delden, N. Harada, B.L. Feringa, "Light-driven monodirectional molecular rotor," Nature 401 (September 9, 1999): 152-155.

[6] Richard E. Smalley, "Of chemistry, love, and nanobots," Scientific American 285 (September, 2001): 76-77.  http://smalley.rice.edu/rick's%20publications/SA285-76.pdf.

[7] Nanosystems: molecular machinery, manufacturing, and computation, by K. Eric Drexler, Wiley 1992.

[8] See, for example, Charles B. Musgrave, Jason K. Perry, Ralph C. Merkle, William A. Goddard III, "Theoretical studies of a hydrogen abstraction tool for nanotechnology," Nanotechnology 2 (1991): 187-195.

[9] See equation and explanation on page 3 of "That's Impossible!" How good scientists reach bad conclusions by Ralph C. Merkle, http://www.zyvex.com/nanotech/impossible.html. 

[10] C.D. Montemagno, G.D. Bachand, "Constructing nanomechanical devices powered by biomolecular motors," Nanotechnology 10 (1999): 225-231.

[11] By way of disclosure, the author is an advisor and investor in this company.

[12]  Chemical & Engineering News, December 1, 2003

[13] A. Zaks, A.M. Klibanov, Science 224 (1984): 1249-1251.

[14] "The apparent simplicity of the water molecule belies the enormous complexity of its interactions with other molecules, including other water molecules" (A. Soper. 2002. "Water and ice." Science 297: 1288-1289). There is much that is still up for debate, as shown by the numerous articles still being published about this most basic of molecules, H20. For example, D. Klug. 2001. "Glassy water." Science 294:2305-2306; P. Geissler et al., 2001. "Autoionization in liquid water." Science 291(5511):2121-2124; J.K. Gregory et al. 1997. "The water dipole moment in water clusters." Science 275:814-817; and K. Liu et al. 1996. "Water clusters." Science 271:929-933;

A water molecule has slightly negative and slightly positive ends, which means water molecules interact with other water molecules to form networks. The partially positive hydrogen atom on one molecule is attracted to the partially negative oxygen on a neighboring molecule (hydrogen bonding). Three-dimensional hexamers involving 6 molecules are thought to be particularly stable, though none of these clusters lasts longer than a few picoseconds.

The polarity of water results in a number of anomalous properties. One of the best known is that the solid phase (ice) is less dense than the liquid phase. This is because the volume of water varies with the temperature, and the volume increases by about 9% on freezing. Due to hydrogen bonding, water also has a higher-than-expected boiling point.

[15] http://www.foresight.org/SciAmDebate/SciAmResponse.html, http://www.imm.org/SciAmDebate2/smalley.html, http://www.rfreitas.com/Nano/DimerTool.htm.

[16] The analysis of the hydrogen abstraction tool has involved many people, including: Donald W. Brenner, Richard J. Colton, K. Eric Drexler, William A. Goddard, III, J. A. Harrison, Jason K. Perry, Ralph C. Merkle, Charles B. Musgrave, O. A. Shenderova, Susan B. Sinnott, and Carter T. White.

[17] Ralph C. Merkle, "A proposed 'metabolism' for a hydrocarbon assembler," Nanotechnology 8(1997):149-162; http://www.zyvex.com/nanotech/hydroCarbonMetabolism.html

[18] Wilson Ho, Hyojune Lee, "Single bond formation and characterization with a scanning tunneling microscope," Science 286(26 November 1999):1719-1722; http://www.physics.uci.edu/~wilsonho/stm-iets.html.

K. Eric Drexler, Nanosystems: Molecular Machinery, Manufacturing, and Computation, John Wiley & Sons, New York, 1992, Chapter 8.

Ralph C. Merkle, "A proposed 'metabolism' for a hydrocarbon assembler," Nanotechnology 8(1997):149-162; http://www.zyvex.com/nanotech/hydroCarbonMetabolism.html.

Charles B. Musgrave, Jason K. Perry, Ralph C. Merkle, William A. Goddard III, "Theoretical studies of a hydrogen abstraction tool for nanotechnology," Nanotechnology 2(1991):187-195; http://www.zyvex.com/nanotech/Habs/Habs.html.

Michael Page, Donald W. Brenner, "Hydrogen abstraction from a diamond surface: Ab initio quantum chemical study using constrained isobutane as a model," J. Am. Chem. Soc. 113(1991):3270-3274.

Susan B. Sinnott, Richard J. Colton, Carter T. White, Donald W. Brenner, "Surface patterning by atomically-controlled chemical forces: molecular dynamics simulations," Surf. Sci. 316(1994):L1055-L1060.

D.W. Brenner, S.B. Sinnott, J.A. Harrison, O.A. Shenderova, "Simulated engineering of nanostructures," Nanotechnology 7(1996):161-167; http://www.zyvex.com/nanotech/nano4/brennerPaper.pdf

S.P. Walch, W.A. Goddard III, R.C. Merkle, "Theoretical studies of reactions on diamond surfaces," Fifth Foresight Conference on Molecular Nanotechnology, 1997; http://www.foresight.org/Conferences/MNT05/Abstracts/Walcabst.html.

Stephen P. Walch, Ralph C. Merkle, "Theoretical studies of diamond mechanosynthesis reactions," Nanotechnology 9(1998):285-296.

Fedor N. Dzegilenko, Deepak Srivastava, Subhash Saini, "Simulations of carbon nanotube tip assisted mechano-chemical reactions on a diamond surface," Nanotechnology 9(December 1998):325-330.

J.W. Lyding, K. Hess, G.C. Abeln, D.S. Thompson, J.S. Moore, M.C. Hersam, E.T. Foley, J. Lee, Z. Chen, S.T. Hwang, H. Choi, P.H. Avouris, I.C. Kizilyalli, "UHV-STM nanofabrication and hydrogen/deuterium desorption from silicon surfaces: implications for CMOS technology," Appl. Surf. Sci. 130(1998):221-230.

E.T. Foley, A.F. Kam, J.W. Lyding, P.H. Avouris, "Cryogenic UHV-STM study of hydrogen and deuterium desorption from Si(100)," Phys. Rev. Lett. 80(1998):1336-1339.

M.C. Hersam, G.C. Abeln, J.W. Lyding, "An approach for efficiently locating and electrically contacting nanostructures fabricated via UHV-STM lithography on Si(100)," Microelectronic Engineering 47(1999):235-.

L.J. Lauhon, W. Ho, "Inducing and observing the abstraction of a single hydrogen atom in bimolecular reaction with a scanning tunneling microscope," J. Phys. Chem. 105(2000):3987-3992.

Ralph C. Merkle, Robert A. Freitas Jr., “Theoretical analysis of a carbon-carbon dimer placement tool for diamond mechanosynthesis,” J. Nanosci. Nanotechnol. 3(August 2003):319-324. http://www.rfreitas.com/Nano/JNNDimerTool.pdf

Jingping Peng, Robert A. Freitas Jr., Ralph C. Merkle, “Theoretical Analysis of Diamond Mechanosynthesis. Part I. Stability of C2 Mediated Growth of Nanocrystalline Diamond C(110) Surface,” J. Comp. Theor. Nanosci. 1(March 2004). In press.

David J. Mann, Jingping Peng, Robert A. Freitas Jr., Ralph C. Merkle, “Theoretical Analysis of Diamond Mechanosynthesis. Part II. C2 Mediated Growth of Diamond C(110) Surface via Si/Ge-Triadamantane Dimer Placement Tools,” J. Comp. Theor. Nanosci. 1(March 2004). In press.

© 2003 KurzweilAI.net

KurzweilAI.net, Dec. 16, 2003

Foresight Chairman Dr. K. Eric Drexler submitted a letter to the New York Times editor protesting their framing of the Drexler-Smalley debate.

"The Times elected to edit the letter (and apparently omit Mike Treder's separate letter), discarding a key quote from the article, and modifying the last sentence," says Drexler.

The letter, to be published tomorrow (Dec. 16, 2003) in The New York Times, reads (omitted text shown in italics):

Nanobots, Real or Imagined

To the Editor:

Re "Yes, They Can! No, They Can't: Charges Fly in Nanobot Debate" (Dec. 9): The article says the nanotechnology debate is about "whether it is possible to build a nanobot." This ignores the central issue — the feasibility of molecular manufacturing — in which nanobots play no role. Indeed, the article quotes my statement that nanofactories will use "no swarms of roaming, replicating nanobots."

The article neglects critical policy and security issues. Molecular machinery will increase manufacturing productivity a millionfold, yet our national nanotechnology effort now excludes work toward this goal. In a competitive world, continuing this policy would amount to unilateral disarmament.

Focusing on imaginary nanobots may appeal to a fraction of your readers, but it leaves the serious science and policy issues unexamined.

DR. K. ERIC DREXLER
Los Altos, Calif.
 
http://www.kurzweilai.net/meme/frame.html?main=/articles/art0556.html

Testimony of Ray Kurzweil on the Societal Implications of Nanotechnology
 
by Ray Kurzweil

Despite calls to relinquish research in nanotechnology, we will have no choice but to confront the challenge of guiding nanotechnology in a constructive direction. Advances in nanotechnology and related advanced technologies are inevitable. Any broad attempt to relinquish nanotechnology will only push it underground, which would interfere with the benefits while actually making the dangers worse.



 

Testimony presented April 9, 2003 at the Committee on Science, U.S. House of Representatives Hearing to examine the societal implications of nanotechnology and consider H.R. 766, The Nanotechnology Research and Development Act of 2003.

Summary of Testimony:

The size of technology is itself inexorably shrinking.  According to my models, both electronic and mechanical technologies are shrinking by a factor of approximately 5.6 per linear dimension per decade.  At this rate, most of technology will be "nanotechnology" by the 2020s.

We are immeasurably better off as a result of technology, but there is still a lot of suffering in the world to overcome.  We have a moral imperative, therefore, to continue the pursuit of knowledge and advanced technologies, such as nanotechnology, that can continue to overcome human affliction.  There is also an economic imperative to continue due to the pervasive acceleration of technology, including miniaturization, in the competitive economy.

Nanotechnology is not a separate field of study that we can simply relinquish.  We will have no choice but to confront the challenge of guiding nanotechnology in a constructive direction.  There are strategies we can deploy, but defensive technologies will require continual development. 

We can take some level of comfort from our relative success in dealing with one new form of fully non-biological, self-replicating pathogen: the software virus.

The most immediate danger is not self-replicating nanotechnology, but rather self-replicating biotechnology.  We need to place a much higher priority on developing vitally needed defensive technologies such as antiviral medications.  Keep in mind that a bioterrorist does not need to put his "innovations" through the FDA. 

Any broad attempt to relinquish nanotechnology will only push it underground, which would interfere with the benefits while actually making the dangers worse.

Existing regulations on the safety of foods, drugs, and other materials in the environment are sufficient to deal with the near-term applications of nanotechnology, such as nanoparticles.

Full Verbal Testimony:

Chairman Boehlert, distinguished members of the U.S. House of Representatives Committee on Science, and other distinguished guests, I appreciate this opportunity to respond to your questions and concerns on the vital issue of the societal implications of nanotechnology.  Our rapidly growing ability to manipulate matter and energy at ever smaller scales promises to transform virtually every sector of society, including health and medicine, manufacturing, electronics and computers, energy, travel, and defense.  There will be increasing overlap between nanotechnology and other technologies of increasing influence, such as biotechnology and artificial intelligence.  As with any other technological transformation, we will be faced with deeply intertwined promise and peril.

In my brief verbal remarks, I only have time to summarize my conclusions on this complex subject, and I am providing the Committee with an expanded written response that attempts to explain the reasoning behind my views. 

Eric Drexler's 1986 thesis developed the concept of building molecule-scale devices using molecular assemblers that would precisely guide chemical reactions.  Without going through the history of the controversy surrounding feasibility, it is fair to say that the consensus today is that nano-assembly is indeed feasible, although the most dramatic capabilities are still a couple of decades away.

 The concept of nanotechnology today has been expanded to include essentially any technology where the key features are measured in a modest number of nanometers (under 100 by some definitions).  By this standard, contemporary electronics has already passed this threshold. 

For the past two decades, I have studied technology trends, along with a team of researchers who have assisted me in gathering critical measures of technology in different areas, and I have been developing mathematical models of how technology evolves.  Several conclusions from this study have a direct bearing on the issues before this hearing.  Technologies, particularly those related to information, develop at an exponential pace, generally doubling in capability and price-performance every year.  This observation includes the power of computation, communication – both wired and wireless, DNA sequencing, brain scanning, brain reverse engineering, and the size and scope of human knowledge in general.  Of particular relevance to this hearing, the size of technology is itself inexorably shrinking.  According to my models, both electronic and mechanical technologies are shrinking by a factor of approximately 5.6 per linear dimension per decade.  At this rate, most of technology will be "nanotechnology" by the 2020s. 

The golden age of nanotechnology is, therefore, a couple of decades away.  This era will bring us the ability to essentially convert software, i.e., information, directly into physical products.  We will be able to produce virtually any product for pennies per pound.  Computers will have greater computational capacity than the human brain, and we will be completing the reverse engineering of the human brain to reveal the software design of human intelligence.  We are already placing devices with narrow intelligence in our bodies for diagnostic and therapeutic purposes.  With the advent of nanotechnology, we will be able to keep our bodies and brains in a healthy, optimal state indefinitely.  We will have technologies to reverse environmental pollution.  Nanotechnology and related advanced technologies of the 2020s will bring us the opportunity to overcome age-old problems, including pollution, poverty, disease, and aging. 

We hear increasingly strident voices that object to the intermingling of the so-called natural world with the products of our technology.  The increasing intimacy of our human lives with our technology is not a new story, and I would remind the committee that had it not been for the technological advances of the past two centuries, most of us here today would not be here at all.  Human life expectancy was 37 years in 1800.  Most humans at that time lived lives dominated by poverty, intense labor, disease, and misfortune.  We are immeasurably better off as a result of technology, but there is still a lot of suffering in the world to overcome.  We have a moral imperative, therefore, to continue the pursuit of knowledge and of advanced technologies that can continue to overcome human affliction.

There is also an economic imperative to continue.  Nanotechnology is not a single field of study that we can simply relinquish, as suggested by Bill Joy's essay, "Why the Future Doesn't Need Us."  Nanotechnology is advancing on hundreds of fronts and is an extremely diverse activity.  We cannot relinquish its pursuit without essentially relinquishing all of technology, which would require a Brave New World totalitarian scenario that is inconsistent with the values of our society.

Technology has always been a double-edged sword, and that is certainly true of nanotechnology.  The same technology that promises to advance human health and wealth also has the potential for destructive applications.  We can see that duality today in biotechnology.  The same techniques that could save millions of lives from cancer and disease may also empower a bioterrorist to create a bioengineered pathogen.

A lot of attention has been paid to the problem of self-replicating nanotechnology entities that could essentially form a nonbiological cancer that would threaten the planet. I discuss in my written testimony steps we can take now and in the future to ameliorate these dangers. However, the primary point I would like to make is that we will have no choice but to confront the challenge of guiding nanotechnology in a constructive direction.  Any broad attempt to relinquish nanotechnology will only push it underground, which would interfere with the benefits while actually making the dangers worse. 

As a test case, we can take a small measure of comfort from how we have dealt with one recent technological challenge. There exists today a new form of fully nonbiological self-replicating entity that didn't exist just a few decades ago: the computer virus.  When this form of destructive intruder first appeared, strong concerns were voiced that as they became more sophisticated, software pathogens had the potential to destroy the computer network medium they live in. Yet the "immune system" that has evolved in response to this challenge has been largely effective. Although destructive self-replicating software entities do cause damage from time to time, the injury is but a small fraction of the benefit we receive from the computers and communication links that harbor them. No one would suggest we do away with computers, local area networks, and the Internet because of software viruses. 

One might counter that computer viruses do not have the lethal potential of biological viruses or of destructive nanotechnology. This is not always the case: we rely on software to monitor patients in critical care units, to fly and land airplanes, to guide intelligent weapons in our current campaign in Iraq, and other "mission critical" tasks. To the extent that this is true, however, this observation only strengthens my argument.  The fact that computer viruses are not usually deadly to humans only means that more people are willing to create and release them.  It also means that our response to the danger is that much less intense.  Conversely, when it comes to self-replicating entities that are potentially lethal on a large scale, our response on all levels will be vastly more serious, as we have seen since 9-11. 

I would describe our response to software pathogens as effective and successful.  Although they remain (and always will remain) a concern, the danger remains at a nuisance level.  Keep in mind that this success is in an industry in which there is no regulation, and no certification for practitioners.  This largely unregulated industry is also enormously productive.  One could argue that it has contributed more to our technological and economic progress than any other enterprise in human history.  

Some of the concerns that have been raised, such as Bill Joy's article, are effective because they paint a picture of future dangers as if they were released on today's unprepared world.  The reality is that the sophistication and power of our defensive technologies and knowledge will grow along with the dangers. 

The challenge most immediately in front of us is not self-replicating nanotechnology, but rather self-replicating biotechnology.  The next two decades will be the golden age of biotechnology, whereas the comparable era for nanotechnology will follow in the 2020s and beyond.  We are now in the early stages of a transforming technology based on the intersection of biology and information science.  We are learning the "software" methods of life and disease processes.  By reprogramming the information processes that lead to and encourage disease and aging, we will have the ability to overcome these afflictions.  However, the same knowledge can also empower a terrorist to create a bioengineered pathogen.

As we compare the success we have had in controlling engineered software viruses to the coming challenge of controlling engineered biological viruses, we are struck with one salient difference.  As I noted, the software industry is almost completely unregulated.  The same is obviously not the case for biotechnology.  A bioterrorist does not need to put his "innovations" through the FDA.  However, we do require the scientists developing the defensive technologies to follow the existing regulations, which slow down the innovation process at every step.  Moreover, it is impossible, under existing regulations and ethical standards, to test defenses to bioterrorist agents on humans.  There is already extensive discussion about modifying these regulations to allow for animal models and simulations to replace infeasible human trials.  This will be necessary, but I believe we will need to go beyond these steps to accelerate the development of vitally needed defensive technologies. 

With the human genome project, 3 to 5 percent of the budgets were devoted to the ethical, legal, and social implications (ELSI) of the technology.  A similar commitment for nanotechnology would be appropriate and constructive. 

Near-term applications of nanotechnology are far more limited in their benefits as well as more benign in their potential dangers.  These include developments in the materials area involving the addition of particles with multi-nanometer features to plastics, textiles, and other products.  Such particles have perhaps their greatest potential in pharmaceutical development, enabling new strategies for highly targeted drugs that reach the appropriate tissues and perform their intended function while minimizing side effects.  This development is not qualitatively different from what we have been doing for decades in that many new materials involve constituent particles that are novel and of a similar physical scale.  The emerging nanoparticle technology provides more precise control, but the idea of introducing new nonbiological materials into the environment is hardly a new phenomenon.  We cannot say a priori that all nanoengineered particles are safe, nor would it be appropriate to deem them necessarily unsafe.  Environmental tests thus far have not shown reasons for undue concern, and it is my view that existing regulations on the safety of foods, drugs, and other materials in the environment are sufficient to deal with these near-term applications. 

The voices that are expressing concern about nanotechnology are the same voices that have expressed undue levels of concern about genetically modified organisms.  As with nanoparticles, GMOs are neither inherently safe nor unsafe, and reasonable levels of regulation for safety are appropriate.  However, none of the dire warnings about GMOs have come to pass.  Already, African nations, such as Zambia and Zimbabwe, have rejected vitally needed food aid under pressure from European anti-GMO activists.  The reflexive anti-technology stance that has been reflected in the GMO controversy will not be helpful in balancing the benefits and risks of nanoparticle technology.

In summary, I believe that existing regulatory mechanisms are sufficient to handle near-term applications of nanotechnology.  As for the long term, we need to appreciate that a myriad of nanoscale technologies are inevitable.  The current examinations and dialogues on achieving the promise while ameliorating the peril are appropriate and will deserve sharply increased attention as we get closer to realizing these revolutionary technologies. 

Written Testimony

I am pleased to provide a more detailed written response to the issues raised by the committee.  In this written portion of my response, I address the following issues:

 

Models of Technology Trends

A diverse technology such as nanotechnology progresses on many fronts and comprises hundreds of small steps forward, each benign in itself.  An examination of these trends shows that technology in which the key features are measured in a small number of nanometers is inevitable.  Here I provide some examples from my study of technology trends. 

The motivation for this study came from my interest in inventing.  As an inventor in the 1970s, I came to realize that my inventions needed to make sense in terms of the enabling technologies and market forces that would exist when the invention was introduced, which would represent a very different world from the one in which it was conceived.  I began to develop models of how distinct technologies – electronics, communications, computer processors, memory, magnetic storage, and the size of technology – developed and how these changes rippled through markets and ultimately our social institutions.  I realized that most inventions fail not because they never work, but because their timing is wrong.  Inventing is a lot like surfing: you have to anticipate and catch the wave at just the right moment. 

In the 1980s, my interest in technology trends and implications took on a life of its own, and I began to use my models of technology trends to project and anticipate the technologies of future times, such as the years 2000, 2010, 2020, and beyond.  This enabled me to invent with the capabilities of the future.  In the late 1980s, I wrote my first book, The Age of Intelligent Machines, which ended with the specter of machine intelligence becoming indistinguishable from its human progenitors.  This book included hundreds of predictions about the 1990s and early 2000s, and my track record of prediction has held up well. 

During the 1990s I gathered empirical data on the apparent acceleration of all information-related technologies and sought to refine the mathematical models underlying these observations.  In The Age of Spiritual Machines (ASM), which I wrote in 1998, I introduced refined models of technology, and a theory I called "the law of accelerating returns," which explained why technology evolves in an exponential fashion. 

The Intuitive Linear View versus the Historical Exponential View

The future is widely misunderstood.  Our forebears expected the future to be pretty much like their present, which had been pretty much like their past.  Although exponential trends did exist a thousand years ago, they were at that very early stage where an exponential trend is so flat and so slow that it looks like no trend at all.  So their lack of expectations was largely fulfilled.  Today, in accordance with the common wisdom, everyone expects continuous technological progress and the social repercussions that follow.  But the future will nonetheless be far more surprising than most observers realize because few have truly internalized the implications of the fact that the rate of change itself is accelerating. 

Most long-range forecasts of technical feasibility in future time periods dramatically underestimate the power of future developments because they are based on what I call the "intuitive linear" view of history rather than the "historical exponential view."  To express this another way, it is not the case that we will experience a hundred years of progress in the twenty-first century; rather we will witness on the order of twenty thousand years of progress (at today's rate of progress, that is).

When people think of a future period, they intuitively assume that the current rate of progress will continue for future periods.  Even for those who have been around long enough to experience how the pace increases over time, an unexamined intuition nonetheless provides the impression that progress changes at the rate that we have experienced recently.  From the mathematician's perspective, a primary reason for this is that an exponential curve approximates a straight line when viewed for a brief duration.  It is typical, therefore, that even sophisticated commentators, when considering the future, extrapolate the current pace of change over the next 10 years or 100 years to determine their expectations.  This is why I call this way of looking at the future the "intuitive linear" view. 

But a serious assessment of the history of technology shows that technological change is exponential.  In exponential growth, we find that a key measurement such as computational power is multiplied by a constant factor for each unit of time (e.g., doubling every year) rather than just being added to incrementally.  Exponential growth is a feature of any evolutionary process, of which technology is a primary example.  One can examine the data in different ways, on different time scales, and for a wide variety of technologies ranging from electronic to biological, as well as social implications ranging from the size of the economy to human life span, and the acceleration of progress and growth applies.  Indeed, we find not just simple exponential growth, but "double" exponential growth, meaning that the rate of exponential growth is itself growing exponentially.  These observations do not rely merely on an assumption of the continuation of Moore's law (i.e., the exponential shrinking of transistor sizes on an integrated circuit), but are based on a rich model of diverse technological processes.  What it clearly shows is that technology, particularly the pace of technological change, advances (at least) exponentially, not linearly, and has been doing so since the advent of technology, indeed since the advent of evolution on Earth.

Many scientists and engineers have what my colleague Lucas Hendrich calls "engineer's pessimism."  Often an engineer or scientist who is so immersed in the difficulties and intricate details of a contemporary challenge fails to appreciate the ultimate long-term implications of their own work and, in particular, of the larger field in which they operate.  Consider the biochemists in 1985 who were skeptical of the announcement of the goal of sequencing the entire genome in a mere 15 years.  These scientists had just spent an entire year sequencing a mere one ten-thousandth of the genome, so even with reasonable anticipated advances, it seemed to them that it would be hundreds of years, if not longer, before the entire genome could be sequenced.  Or consider the skepticism expressed in the mid-1980s that the Internet would ever be a significant phenomenon, given that it included only tens of thousands of nodes.  The fact that the number of nodes was doubling every year and there were, therefore, likely to be tens of millions of nodes ten years later was not appreciated by those who struggled with "state of the art" technology in 1985, which permitted adding only a few thousand nodes throughout the world in a year.

I emphasize this point because it is the most important failure that would-be prognosticators make in considering future trends.  The vast majority of technology forecasts and forecasters ignore altogether this "historical exponential view" of technological progress.  Indeed, almost everyone I meet has a linear view of the future.  That is why people tend to overestimate what can be achieved in the short term (because we tend to leave out necessary details), but underestimate what can be achieved in the long term (because the exponential growth is ignored). 

The Law of Accelerating Returns

The ongoing acceleration of technology is the implication and inevitable result of what I call the "law of accelerating returns," which describes the acceleration of the pace and the exponential growth of the products of an evolutionary process. This includes technology, particularly information-bearing technologies, such as computation.  More specifically, the law of accelerating returns states the following:

If we apply these principles at the highest level of evolution on Earth, the first step, the creation of cells, introduced the paradigm of biology.  The subsequent emergence of DNA provided a digital method to record the results of evolutionary experiments.  Then, the evolution of a species that combined rational thought with an opposable appendage (the thumb) caused a fundamental paradigm shift from biology to technology.  The upcoming primary paradigm shift will be from biological thinking to a hybrid combining biological and nonbiological thinking.  This hybrid will include "biologically inspired" processes resulting from the reverse engineering of biological brains.

If we examine the timing of these steps, we see that the process has continuously accelerated.  The evolution of life forms required billions of years for the first steps (e.g., primitive cells); later on, progress accelerated.  During the Cambrian explosion, major paradigm shifts took only tens of millions of years.  Later, humanoids developed over a period of millions of years, and Homo sapiens over a period of only hundreds of thousands of years. 

With the advent of a technology-creating species, the exponential pace became too fast for evolution through DNA-guided protein synthesis and moved on to human-created technology.  Technology goes beyond mere tool making; it is a process of creating ever more powerful technology using the tools from the previous round of innovation, and is, thereby, an evolutionary process.  The first technological steps – sharp edges, fire, the wheel – took tens of thousands of years.  For people living in this era, there was little noticeable technological change in even a thousand years.  By 1000 AD, progress was much faster and a paradigm shift required only a century or two.  In the nineteenth century, we saw more technological change than in the nine centuries preceding it.  Then in the first twenty years of the twentieth century, we saw more advancement than in all of the nineteenth century.  Now, paradigm shifts occur in only a few years time.  The World Wide Web did not exist in anything like its present form just a few years ago; it didn't exist at all a decade ago.

The paradigm shift rate (i.e., the overall rate of technical progress) is currently doubling (approximately) every decade; that is, paradigm shift times are halving every decade (and the rate of acceleration is itself growing exponentially).  So, the technological progress in the twenty-first century will be equivalent to what would require (in the linear view) on the order of 200 centuries.  In contrast, the twentieth century saw only about 20 years of progress (again at today's rate of progress) since we have been speeding up to current rates.  So the twenty-first century will see about a thousand times greater technological change than its predecessor. 
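The arithmetic behind these figures is simple enough to sketch.  The short calculation below is only an illustration, under the simplifying assumptions that the rate of progress doubles every decade and that each decade is credited at its end-of-decade rate (normalized so that 1.0 is the rate in the year 2000); on those assumptions it reproduces the roughly 20 years and roughly 200 centuries cited above.

```python
def century_progress(decade_exponents):
    # Each decade contributes 10 calendar years, weighted by 2**k, where k counts
    # decades after the year 2000 (k = 0 is the decade ending in 2000).
    return sum(10 * 2 ** k for k in decade_exponents)

twentieth = century_progress(range(-9, 1))     # decades ending 1910 through 2000
twenty_first = century_progress(range(1, 11))  # decades ending 2010 through 2100

print(f"20th century: ~{twentieth:.0f} years of progress at today's rate")
print(f"21st century: ~{twenty_first:,.0f} years of progress "
      f"(~{twenty_first / 100:.0f} centuries) at today's rate")
```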

Moore's Law and Beyond

There is a wide range of technologies that are subject to the law of accelerating returns.  The exponential trend that has gained the greatest public recognition has become known as "Moore's Law." Gordon Moore, one of the inventors of integrated circuits, and then Chairman of Intel, noted in the mid-1970s that we could squeeze twice as many transistors on an integrated circuit every 24 months.  Given that the electrons have less distance to travel, the circuits also run twice as fast, providing an overall quadrupling of computational power.

However, the exponential growth of computing is much broader than Moore's Law.

If we plot the speed (in instructions per second) per $1000 (in constant dollars) of 49 famous calculators and computers spanning the entire twentieth century, we note that there were four completely different paradigms that provided exponential growth in the price-performance of computing before integrated circuits were invented.  Therefore, Moore's Law was not the first, but the fifth paradigm to exponentially grow the power of computation.  And it won't be the last.  When Moore's Law reaches the end of its S-curve, now expected before 2020, the exponential growth will continue with three-dimensional molecular computing, a prime example of the application of nanotechnology, which will constitute the sixth paradigm.

When I suggested in my book The Age of Spiritual Machines, published in 1999, that three-dimensional molecular computing, particularly an approach based on using carbon nanotubes, would become the dominant computing hardware technology in the teen years of this century, that was considered a radical notion.  There has been so much progress in the past four years, with literally dozens of major milestones having been achieved, that this expectation is now a mainstream view. 

Moore's Law Was Not the First, but the Fifth Paradigm to Provide Exponential Growth of Computing. Each time one paradigm runs out of steam, another picks up the pace

The exponential growth of computing is a marvelous quantitative example of the exponentially growing returns from an evolutionary process.  We can express the exponential growth of computing in terms of an accelerating pace: it took 90 years to achieve the first MIPS (million instructions per second) per thousand dollars; now we add one MIPS per thousand dollars every day. 

Moore's Law narrowly refers to the number of transistors on an integrated circuit of fixed size, and sometimes has been expressed even more narrowly in terms of transistor feature size.  But rather than feature size (which is only one contributing factor), or even number of transistors, I think the most appropriate measure to track is computational speed per unit cost.  This takes into account many levels of "cleverness" (i.e., innovation, which is to say, technological evolution).  In addition to all of the innovation in integrated circuits, there are multiple layers of innovation in computer design, e.g., pipelining, parallel processing, instruction look-ahead, instruction and memory caching, and many others. 

The human brain uses a very inefficient electrochemical, digitally controlled analog computational process.  The bulk of the calculations are done in the interneuronal connections at a speed of only about 200 calculations per second (in each connection), which is about ten million times slower than contemporary electronic circuits.  But the brain gains its prodigious powers from its extremely parallel organization in three dimensions.  There are many technologies in the wings that build circuitry in three dimensions.  Nanotubes, an example of nanotechnology that is already working in laboratories, build circuits from hexagonal arrays of carbon atoms.  One cubic inch of nanotube circuitry would be a million times more powerful than the human brain.  There are more than enough new computing technologies now being researched, including three-dimensional silicon chips, optical and silicon spin computing, crystalline computing, DNA computing, and quantum computing, to keep the law of accelerating returns as applied to computation going for a long time.

As I discussed above, it is important to distinguish between the "S" curve (an "S" stretched to the right, comprising very slow, virtually unnoticeable growth – followed by very rapid growth – followed by a flattening out as the process approaches an asymptote) that is characteristic of any specific technological paradigm and the continuing exponential growth that is characteristic of the ongoing evolutionary process of technology.  Specific paradigms, such as Moore's Law, do ultimately reach levels at which exponential growth is no longer feasible.  That is why Moore's Law is an S curve.  But the growth of computation is an ongoing exponential (at least until we "saturate" the Universe with the intelligence of our human-machine civilization, but that will not be a limit in this coming century).  In accordance with the law of accelerating returns, paradigm shift, also called innovation, turns the S curve of any specific paradigm into a continuing exponential. A new paradigm (e.g., three-dimensional circuits) takes over when the old paradigm approaches its natural limit, which has already happened at least four times in the history of computation.  This difference also distinguishes the tool making of non-human species, in which the mastery of a tool-making (or using) skill by each animal is characterized by an abruptly ending S shaped learning curve, versus human-created technology, which has followed an exponential pattern of growth and acceleration since its inception. 
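The relationship between individual S-curves and the ongoing exponential can be seen in a toy model.  In the sketch below the numbers are hypothetical, chosen only for illustration: each paradigm is a logistic curve that saturates at a fixed capability ceiling, each successive ceiling is 100 times higher than the last, and the sum of the cascade keeps climbing roughly exponentially even though every individual curve flattens out.

```python
import math

def logistic(t, midpoint, ceiling, steepness=1.0):
    """Capability contributed by one paradigm at time t (arbitrary units)."""
    return ceiling / (1.0 + math.exp(-steepness * (t - midpoint)))

# Five hypothetical paradigms: one arrives every 10 time units, each with a
# capability ceiling 100x higher than its predecessor.
paradigms = [(10 * i, 100.0 ** i) for i in range(1, 6)]

for t in range(0, 55, 5):
    total = sum(logistic(t, mid, ceiling) for mid, ceiling in paradigms)
    print(f"t={t:2d}  combined capability ≈ {total:16.1f}")
```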

DNA Sequencing, Memory, Communications, the Internet, and Miniaturization

This "law of accelerating returns" applies to all of technology, indeed to any true evolutionary process, and can be measured with remarkable precision in information-based technologies.  There are a great many examples of the exponential growth implied by the law of accelerating returns in technologies, as varied as DNA sequencing, communication speeds, brain scanning, electronics of all kinds, and even in the rapidly shrinking size of technology, which is directly relevant to the discussion at this hearing.  The future nanotechnology age results not from the exponential explosion of computation alone, but rather from the interplay and myriad synergies that will result from manifold intertwined technological revolutions.  Also, keep in mind that every point on the exponential growth curves underlying these panoply of technologies (see the graphs below) represents an intense human drama of innovation and competition.  It is remarkable therefore that these chaotic processes result in such smooth and predictable exponential trends. 

As I noted above, when the human genome scan started fourteen years ago, critics pointed out that given the speed with which the genome could then be scanned, it would take thousands of years to finish the project.  Yet the fifteen-year project was nonetheless completed slightly ahead of schedule. 

Of course, we expect to see exponential growth in electronic memories such as RAM.

Notice How Exponential Growth Continued through Paradigm Shifts from Vacuum Tubes to Discrete Transistors to Integrated Circuits

However, growth in magnetic memory is not primarily a matter of Moore's law, but includes advances in mechanical and electromagnetic systems.

Exponential growth in communications technology has been even more explosive than in computation and is no less significant in its implications.  Again, this progression involves far more than just shrinking transistors on an integrated circuit, but includes accelerating advances in fiber optics, optical switching, electromagnetic technologies, and others.

Notice Cascade of "S" Curves

Note that in the above chart we can actually see the progression of "S" curves: the acceleration fostered by a new paradigm, followed by a leveling off as the paradigm runs out of steam, followed by renewed acceleration through paradigm shift.  

The following two charts show the overall growth of the Internet based on the number of hosts (server computers).  These two charts plot the same data, but one is on an exponential axis and the other is linear.  As I pointed out earlier, whereas technology progresses in the exponential domain, we experience it in the linear domain.  So from the perspective of most observers, nothing was happening until the mid 1990s when seemingly out of nowhere, the World Wide Web and email exploded into view.  But the emergence of the Internet into a worldwide phenomenon was readily predictable much earlier by examining the exponential trend data.

Notice how the explosion of the Internet appears to be a surprise from the Linear Chart, but was perfectly predictable from the Exponential Chart
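The same point can be made with a toy series; the numbers below are hypothetical and are not the actual host-count data behind these charts.  A quantity that doubles every year is still at only about three percent of its final value three-quarters of the way through the period, so a linear chart shows almost nothing happening until the very end, while the logarithm of the same values climbs in a straight line throughout.

```python
import math

values = [2 ** year for year in range(21)]   # a hypothetical quantity doubling every year
final = values[-1]

for year in (5, 10, 15, 20):
    v = values[year]
    print(f"year {year:2d}: value {v:>9,d}  "
          f"({100 * v / final:5.1f}% of the final value, log10 = {math.log10(v):5.2f})")
```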

The most relevant trend to this hearing, and one that will have profound implications for the twenty-first century, is the pervasive trend towards making things smaller, i.e., miniaturization.  The salient implementation sizes of a broad range of technologies, both electronic and mechanical, are shrinking, also at a double-exponential rate.  At present, we are shrinking technology by a factor of approximately 5.6 per linear dimension per decade. 
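To see what this shrink rate implies, the short projection below applies the factor of 5.6 per linear dimension per decade to an assumed starting key feature size of roughly 100 nanometers; the starting figure is my assumption for this sketch only, not part of the trend data.  On these assumptions, key feature sizes reach the single-nanometer range within two to three decades, consistent with the timeline discussed below.

```python
shrink_per_decade = 5.6   # the shrink factor per linear dimension per decade cited above
feature_nm = 100.0        # assumed starting feature size in nanometers (illustrative only)

for decade in range(4):
    print(f"after {decade} decades: ~{feature_nm / shrink_per_decade ** decade:6.2f} nm")
```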

A Small Sample of Examples of True Nanotechnology

Ubiquitous nanotechnology is two to three decades away.  A prime example of its application will be to deploy billions of "nanobots": small robots the size of human blood cells that can travel inside the human bloodstream.  This notion is not as futuristic as it may sound, in that there have already been successful animal experiments using this concept.  There are already four major conferences on "BioMEMS" (Biological Micro-Electro-Mechanical Systems) covering devices in the human bloodstream. 

Consider several examples of nanobot technology, which, based on miniaturization and cost reduction trends, will be feasible within 30 years.  In addition to scanning the human brain to facilitate human brain reverse engineering, these nanobots will be able to perform a broad variety of diagnostic and therapeutic functions inside the bloodstream and human body.  Robert Freitas, for example, has designed robotic replacements for human blood cells that perform hundreds or thousands of times more effectively than their biological counterparts.  With Freitas' "respirocytes" (robotic red blood cells), you could do an Olympic sprint for 15 minutes without taking a breath.  His robotic macrophages will be far more effective than our white blood cells at combating pathogens.  His DNA repair robot would be able to repair DNA transcription errors, and even implement needed DNA changes.  Although Freitas' conceptual designs are two or three decades away, there has already been substantial progress on bloodstream-based devices.  For example, one scientist has cured Type 1 diabetes in rats with a nanoengineered device that incorporates pancreatic islet cells.  The device has seven-nanometer pores that let insulin out but block the antibodies that destroy these cells.  There are many innovative projects of this type already under way. 

Clearly, nanobot technology has profound military applications, and any expectation that such uses will be "relinquished" is highly unrealistic.  Already, DOD is developing "smart dust": tiny robots the size of insects or even smaller.  Although not quite nanotechnology, millions of these devices could be dropped into enemy territory to provide highly detailed surveillance.  The potential of even smaller, nanotechnology-based devices is greater still.  Want to find Saddam Hussein or Osama bin Laden?  Need to locate hidden weapons of mass destruction?  Billions of essentially invisible spies could monitor every square inch of enemy territory, identify every person and every weapon, and even carry out missions to destroy enemy targets.  The only way for an enemy to counteract such a force is, of course, with its own nanotechnology.  The point is that nanotechnology-based weapons will render larger weapons obsolete. 

In addition, nanobots will be able to expand our experiences and our capabilities.  Nanobot technology will provide fully immersive, totally convincing virtual reality in the following way.  The nanobots take up positions in close physical proximity to every interneuronal connection coming from all of our senses (e.g., eyes, ears, skin).  We already have the technology for electronic devices to communicate with neurons in both directions without any direct physical contact with the neurons.  For example, scientists at the Max Planck Institute have developed "neuron transistors" that can detect the firing of a nearby neuron, or alternatively, can cause a nearby neuron to fire or suppress it from firing.  This amounts to two-way communication between neurons and the electronic-based neuron transistors.  The Institute's scientists demonstrated their invention by controlling the movement of a living leech from their computer.  Again, the primary aspects of nanobot-based virtual reality that are not yet feasible are size and cost. 

When we want to experience real reality, the nanobots just stay in position (in the capillaries) and do nothing.  If we want to enter virtual reality, they suppress all of the inputs coming from the real senses, and replace them with the signals that would be appropriate for the virtual environment.  You (i.e., your brain) could decide to cause your muscles and limbs to move as you normally would, but the nanobots again intercept these interneuronal signals, suppress your real limbs from moving, and instead cause your virtual limbs to move and provide the appropriate movement and reorientation in the virtual environment. 

The Web will provide a panoply of virtual environments to explore.  Some will be recreations of real places, others will be fanciful environments that have no "real" counterpart.  Some indeed would be impossible in the physical world (perhaps, because they violate the laws of physics).  We will be able to "go" to these virtual environments by ourselves, or we will meet other people there, both real people and simulated people.  Of course, ultimately there won't be a clear distinction between the two. 

By 2030, going to a web site will mean entering a full-immersion virtual-reality environment.  In addition to encompassing all of the senses, these shared environments can include emotional overlays as the nanobots will be capable of triggering the neurological correlates of emotions, sexual pleasure, and other derivatives of our sensory experience and mental reactions.

In the same way that people today beam their lives from web cams in their bedrooms, "experience beamers" circa 2030 will beam their entire flow of sensory experiences, and if so desired, their emotions and other secondary reactions.  We'll be able to plug in (by going to the appropriate web site) and experience other people's lives, as in the plot concept of 'Being John Malkovich.'  Particularly interesting experiences can be archived and relived at any time.

We won't need to wait until 2030 to experience shared virtual-reality environments, at least for the visual and auditory senses.  Full-immersion visual-auditory environments will be available by the end of this decade, with images written directly onto our retinas by our eyeglasses and contact lenses.  All of the electronics for the computation, image reconstruction, and very high bandwidth wireless connection to the Internet will be embedded in our glasses and woven into our clothing, so computers as distinct objects will disappear.  

In my view, the most significant implication of the development of nanotechnology and related advanced technologies of the 21st century will be the merger of biological and nonbiological intelligence.  First, it is important to point out that well before the end of the twenty-first century, thinking on nonbiological substrates will dominate.  Biological thinking is stuck at 10^26 calculations per second (for all biological human brains), and that figure will not appreciably change, even with bioengineering changes to our genome.  Nonbiological intelligence, on the other hand, is growing at a double-exponential rate and will vastly exceed biological intelligence well before the middle of this century.  However, in my view, this nonbiological intelligence should still be considered human, as it is fully derivative of the human-machine civilization.  The merger of these two worlds of intelligence is not merely a merger of biological and nonbiological thinking mediums, but more importantly one of method and organization of thinking.
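As a back-of-envelope check on the 10^26 figure, the arithmetic below (my own, not taken from the charts) combines the roughly one hundred trillion interneuronal connections and the roughly 200 calculations per second per connection cited elsewhere in this testimony with a world population of about six billion.

```python
connections_per_brain = 1e14   # "a mere hundred trillion connections"
calcs_per_connection = 200     # calculations per second in each connection
world_population = 6e9         # approximate world population circa 2003

per_brain = connections_per_brain * calcs_per_connection   # ~2e16 calculations/second
all_brains = per_brain * world_population                  # ~1e26 calculations/second

print(f"per brain:  ~{per_brain:.1e} calculations per second")
print(f"all brains: ~{all_brains:.1e} calculations per second")
```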

One of the key ways in which the two worlds can interact will be through  nanobots.  Nanobot technology will be able to expand our minds in virtually any imaginable way.  Our brains today are relatively fixed in design.  Although we do add patterns of interneuronal connections and neurotransmitter concentrations as a normal part of the learning process, the current overall capacity of the human brain is highly constrained, restricted to a mere hundred trillion connections.  Brain implants based on massively distributed intelligent nanobots will ultimately expand our memories a trillion fold, and otherwise vastly improve all of our sensory, pattern recognition, and cognitive abilities.  Since the nanobots are communicating with each other over a wireless local area network, they can create any set of new neural connections, can break existing connections (by suppressing neural firing), can create new hybrid biological-nonbiological networks, as well as add vast new nonbiological networks. 

Using nanobots as brain extenders is a significant improvement over the idea of surgically installed neural implants, which are beginning to be used today (e.g., ventral posterior nucleus, subthalamic nucleus, and ventral lateral thalamus neural implants to counteract Parkinson's disease and tremors from other neurological disorders, cochlear implants, and others).  Nanobots will be introduced without surgery, essentially just by injecting or even swallowing them.  They can all be directed to leave, so the process is easily reversible.  They are programmable, in that they can provide virtual reality one minute and a variety of brain extensions the next.  They can change their configuration, and clearly can alter their software.  Perhaps most importantly, they are massively distributed and therefore can take up billions or trillions of positions throughout the brain, whereas a surgically introduced neural implant can only be placed in one or at most a few locations. 

The Economic Imperatives of the Law of Accelerating Returns

It is the economic imperative of a competitive marketplace that is driving technology forward and fueling the law of accelerating returns.  In turn, the law of accelerating returns is transforming economic relationships. 

The primary force driving technology is economic imperative.  We are moving towards nanoscale machines, as well as more intelligent machines, as the result of a myriad of small advances, each with its own particular economic justification. 

To use one small example of many from my own experience at one of my companies (Kurzweil Applied Intelligence), whenever we came up with a slightly more intelligent version of speech recognition, the new version invariably had greater value than the earlier generation and, as a result, sales increased.  It is interesting to note that in the example of speech recognition software, the three primary surviving competitors stayed very close to each other in the intelligence of their software.  A few other companies that failed to do so (e.g., Speech Systems) went out of business.  At any point in time, we would be able to sell the version prior to the latest version for perhaps a quarter of the price of the current version.  As for versions of our technology that were two generations old, we couldn't even give those away. 

There is a vital economic imperative to create smaller and more intelligent technology.  Machines that can more precisely carry out their missions have enormous value.  That is why they are being built.  There are tens of thousands of projects advancing the various aspects of the law of accelerating returns in diverse incremental ways.  Regardless of near-term business cycles, the support for "high tech" in the business community, and in particular for software advancement, has grown enormously.  When I started my optical character recognition (OCR) and speech synthesis company (Kurzweil Computer Products, Inc.) in 1974, high-tech venture deals totaled approximately $10 million.  Even during today's high-tech recession, the figure is 100 times greater.  We would have to repeal capitalism and every vestige of economic competition to stop this progression.

The economy (viewed either in total or per capita) has been growing exponentially throughout this century:

Note that the underlying exponential growth in the economy is a far more powerful force than periodic recessions.  Even the "Great Depression" represents only a minor blip compared to the underlying pattern of growth.  Most importantly, recessions, including the depression, represent only temporary deviations from the underlying curve.  In each case, the economy ends up exactly where it would have been had the recession/depression never occurred. 

Productivity (economic output per worker) has also been growing exponentially.  Even these statistics are greatly understated because they do not fully reflect significant improvements in the quality and features of products and services.  It is not the case that "a car is a car;" there have been significant improvements in safety, reliability, and features.  Certainly, $1000 of computation today is vastly more powerful than $1000 of computation ten years ago (by a factor of more than 1,000).  There are a myriad of such examples.  Pharmaceutical drugs are increasingly effective.  Products ordered in five minutes on the web and delivered to your door are worth more than products that you have to fetch yourself.  Clothes custom-manufactured for your unique body scan are worth more than clothes you happen to find left on a store rack.  These sorts of improvements are true for most product categories, and none of them are reflected in the productivity statistics. 

The statistical methods underlying the productivity measurements tend to factor out gains by essentially concluding that we still only get one dollar of products and services for a dollar despite the fact that we get much more for a dollar (e.g., compare a $1,000 computer today to one ten years ago).  University of Chicago Professor Pete Klenow and University of Rochester Professor Mark Bils estimate that the value of existing goods has been increasing at 1.5% per year for the past 20 years because of qualitative improvements.  This still does not account for the introduction of entirely new products and product categories (e.g., cell phones, pagers, pocket computers).  The Bureau of Labor Statistics, which is responsible for the inflation statistics, uses a model that incorporates an estimate of quality growth at only 0.5% per year, reflecting a systematic underestimate of quality improvement and a resulting overestimate of inflation by at least 1 percent per year. 
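The cumulative effect of that gap is easy to illustrate.  The short calculation below (my own arithmetic, using the 1.5 percent and 0.5 percent figures cited above) compounds the roughly one-point-per-year difference over twenty years; on these figures, the statistics miss about a fifth of the real gain in value over two decades.

```python
true_quality_growth = 0.015      # estimated actual quality improvement per year
measured_quality_growth = 0.005  # quality growth assumed in the official statistics
years = 20

unmeasured = ((1 + true_quality_growth) / (1 + measured_quality_growth)) ** years
print(f"quality gain missed by the statistics over {years} years: "
      f"{(unmeasured - 1) * 100:.0f}%")   # roughly 22%
```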

Despite these weaknesses in the productivity statistical methods, the gains in productivity are now reaching the steep part of the exponential curve.  Labor productivity grew at 1.6% per year until 1994, then rose at 2.4% per year, and is now growing even more rapidly.  In the quarter ending July 30, 2000, labor productivity grew at 5.3%.  Manufacturing productivity grew at 4.4% annually from 1995 to 1999, durables manufacturing at 6.5% per year. 

The 1990s have seen the most powerful deflationary forces in history. This is why we are not seeing inflation.  Yes, it's true that low unemployment, high asset values, economic growth, and other such factors are inflationary, but these factors are offset by the double-exponential trends in the price-performance of all information-based technologies: computation, memory, communications, biotechnology, miniaturization, and even the overall rate of technical progress. These technologies deeply affect all industries.  We are also undergoing massive disintermediation in the channels of distribution through the Web and other new communication technologies, as well as escalating efficiencies in operations and administration. 

All of the technology trend charts above represent massive deflation.  There are many examples of the impact of these escalating efficiencies.  BP Amoco's cost for finding oil is now less than $1 per barrel, down from nearly $10 in 1991.  Processing an Internet transaction costs a bank one penny, compared to over $1 using a teller ten years ago.  A Roland Berger/Deutsche Bank study estimates a cost savings of $1200 per North American car over the next five years.  A more optimistic Morgan Stanley study estimates that Internet-based procurement will save Ford, GM, and DaimlerChrysler about $2700 per vehicle. 

It is important to point out that a key implication of nanotechnology is that it will bring the economics of software to hardware, i.e., to physical products.  Software prices are deflating even more quickly than hardware prices.

Software Price-Performance Has Also Improved at an Exponential Rate (Example: Automatic Speech Recognition Software)

                                     1985       1995       2000
Price                              $5,000       $500        $50
Vocabulary Size (# words)           1,000     10,000    100,000
Continuous Speech?                     No         No        Yes
User Training Required (Minutes)      180         60          5
Accuracy                             Poor       Fair       Good
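A rough annualized rate can be read off this table.  The sketch below (my own arithmetic) counts only the price and vocabulary columns, so it understates the true gain, since accuracy, training time, and continuous-speech capability improved as well.

```python
import math

years = 2000 - 1985
price_ratio = 5000 / 50          # 100x cheaper
vocab_ratio = 100_000 / 1_000    # 100x larger vocabulary

gain = price_ratio * vocab_ratio            # 10,000x more vocabulary per dollar
annual = gain ** (1 / years)                # about 1.85x per year
doubling_years = math.log(2) / math.log(annual)

print(f"vocabulary per dollar improved {gain:,.0f}x in {years} years")
print(f"that is about {annual:.2f}x per year, doubling roughly every "
      f"{doubling_years:.1f} years")
```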

Current economic policy is based on outdated models that include energy prices, commodity prices, and capital investment in plant and equipment as key driving factors, but do not adequately model the size of technology, bandwidth, MIPs, megabytes, intellectual property, knowledge, and other increasingly vital (and increasingly increasing) constituents that are driving the economy.

Another indication of the law of accelerating returns is the exponential growth of human knowledge, including intellectual property.  If we look at the development of intellectual property within the nanotechnology field, we see even more rapid growth.

None of this means that cycles of recession will disappear immediately.  Indeed, there is a current economic slowdown and a technology-sector recession.  The economy still has some of the underlying dynamics that historically have caused cycles of recession, specifically excessive commitments such as over-investment, excessive capital-intensive projects, and the overstocking of inventories.  However, the rapid dissemination of information, sophisticated forms of online procurement, and increasingly transparent markets in all industries have diminished the impact of this cycle.  So "recessions" are likely to have less direct impact on our standard of living.  The underlying long-term growth rate will continue at a double-exponential rate. 

Moreover, innovation and the rate of paradigm shift are not noticeably affected by the minor deviations caused by economic cycles.  All of the technologies exhibiting exponential growth shown in the above charts are continuing without missing a beat through this economic slowdown. 

The overall growth of the economy reflects completely new forms and layers of wealth and value that did not previously exist, or at least that did not previously constitute a significant portion of the economy (but do now): new forms of nanoparticle-based materials, genetic information, intellectual property, communication portals, web sites, bandwidth, software, databases, and many other new technology-based categories. 

Another implication of the law of accelerating returns is exponential growth in education and learning.  Over the past 120 years, we have increased our investment in K-12 education (per student and in constant dollars) by a factor of ten.  There has been a hundredfold increase in the number of college students.  Automation started by amplifying the power of our muscles, and in recent times has been amplifying the power of our minds.  Thus, for the past two centuries, automation has been eliminating jobs at the bottom of the skill ladder while creating new (and better paying) jobs at the top of the skill ladder.  So the ladder has been moving up, and thus we have been exponentially increasing investments in education at all levels. 

The Deeply Intertwined Promise and Peril of Nanotechnology and Related Advanced Technologies

Technology has always been a double-edged sword, bringing us longer and healthier life spans, freedom from physical and mental drudgery, and many new creative possibilities on the one hand, while introducing new and salient dangers on the other.  Technology empowers both our creative and destructive natures.  Stalin's tanks and Hitler's trains used technology.  We still live today with sufficient nuclear weapons (not all of which appear to be well accounted for) to end all mammalian life on the planet.  Bioengineering is in the early stages of enormous strides in reversing disease and aging processes.  However, the means and knowledge will soon exist in a routine college bioengineering lab (and already exist in more sophisticated labs) to create unfriendly pathogens more dangerous than nuclear weapons.  As technology accelerates towards the full realization of biotechnology, nanotechnology and "strong" AI (artificial intelligence at human levels and beyond), we will see the same intertwined potentials: a feast of creativity resulting from human intelligence expanded many-fold, combined with many grave new dangers.  

Consider unrestrained nanobot replication.  Nanobot technology requires billions or trillions of such intelligent devices to be useful.  The most cost-effective way to scale up to such levels is through self-replication, essentially the same approach used in the biological world.  And in the same way that biological self-replication gone awry (i.e., cancer) results in biological destruction, a defect in the mechanism curtailing nanobot self-replication would endanger all physical entities, biological or otherwise. I address below steps we can take to address this grave risk, but we cannot have complete assurance in any strategy that we devise today. 

Other primary concerns include "who is controlling the nanobots?" and "who are the nanobots talking to?"  Organizations (e.g., governments, extremist groups) or just a clever individual could put trillions of undetectable nanobots in the water or food supply of an individual or of an entire population.  These "spy" nanobots could then monitor, influence, and even control our thoughts and actions.  In addition to introducing physical spy nanobots, existing nanobots could be influenced through software viruses and other software "hacking" techniques.  When there is software running in our brains, issues of privacy and security will take on a new urgency. 

My own expectation is that the creative and constructive applications of this technology will dominate, as I believe they do today.  However, I believe we need to invest more heavily in developing specific defensive technologies.  As I address further below, we are at this stage today for biotechnology, and will reach the stage where we need to directly implement defensive technologies for nanotechnology during the late teen years of this century. 

If we imagine describing the dangers that exist today to people who lived a couple of hundred years ago, they would think it mad to take such risks.  On the other hand, how many people in the year 2000 would really want to go back to the short, brutish, disease-filled, poverty-stricken, disaster-prone lives that 99 percent of the human race struggled through a couple of centuries ago?  We may romanticize the past, but up until fairly recently, most of humanity lived extremely fragile lives where one all-too-common misfortune could spell disaster.   Substantial portions of our species still live in this precarious way, which is at least one reason to continue technological progress and the economic enhancement that accompanies it. 

People often go through three stages in examining the impact of future technology: awe and wonderment at its potential to overcome age old problems; then a sense of dread at a new set of grave dangers that accompany these new technologies; followed, finally and hopefully, by the realization that the only viable and responsible path is to set a careful course that can realize the promise while managing the peril. 

This congressional hearing was partly inspired by Bill Joy's cover story for Wired magazine, Why the Future Doesn't Need Us.  Bill Joy, cofounder of Sun Microsystems and principal developer of the Java programming language, has recently taken up a personal mission to warn us of the impending dangers from the emergence of self-replicating technologies in the fields of genetics, nanotechnology, and robotics, which he aggregates under the label "GNR."  Although his warnings are not entirely new, they have attracted considerable attention because of Joy's credibility as one of our leading technologists.  It is reminiscent of the attention that George Soros, the currency arbitrager and arch capitalist, received when he made vaguely critical comments about the excesses of unrestrained capitalism.

Joy's concerns include genetically altered designer pathogens, followed by self-replicating entities created through nanotechnology. And  if we manage to survive these first two perils, we will encounter robots whose intelligence will rival and ultimately exceed our own. Such robots may make great assistants, but who's to say that we can count on them to remain reliably friendly to mere humans?

Although I am often cast as the technology optimist who counters Joy's pessimism, I do share his concerns regarding self-replicating technologies; indeed, I played a role in bringing these dangers to Bill's attention. In many of the dialogues and forums in which I have participated on this subject, I end up defending Joy's position with regard to the feasibility of these technologies and scenarios when they come under attack by commentators who I believe are being quite shortsighted in their skepticism. Even so, I do find fault with Joy's prescription: halting the advance of technology and the pursuit of knowledge in broad fields such as nanotechnology.

In his essay, Bill Joy eloquently described the plagues of centuries past and how new self-replicating technologies, such as mutant bioengineered pathogens and "nanobots" run amok, may bring back long-forgotten pestilence.  Indeed these are real dangers.  It is also the case, which Joy acknowledges, that it has been technological advances, such as antibiotics and improved sanitation, which have freed us from the prevalence of such plagues.  Suffering in the world continues and demands our steadfast attention.  Should we tell the millions of people afflicted with cancer and other devastating conditions that we are canceling the development of all bioengineered treatments because there is a risk that these same technologies may someday be used for malevolent purposes?  Having asked the rhetorical question, I realize that there is a movement to do exactly that, but I think most people would agree that such broad-based relinquishment is not the answer. 

The continued opportunity to alleviate human distress is one important motivation for continuing technological advancement.  Also compelling are the already apparent economic gains I discussed above, which will continue to hasten in the decades ahead.  The continued acceleration of many intertwined technologies produces roads paved with gold (I use the plural here because technology is clearly not a single path).  In a competitive environment, it is an economic imperative to go down these roads.  Relinquishing technological advancement would be economic suicide for individuals, companies, and nations. 

The Relinquishment Issue

This brings us to the issue of relinquishment, which is Bill Joy's most controversial recommendation and personal commitment.  I do feel that relinquishment at the right level is part of a responsible and constructive response to these genuine perils.  The issue, however, is exactly this: at what level are we to relinquish technology?

Ted Kaczynski would have us renounce all of it.  This, in my view, is neither desirable nor feasible, and the futility of such a position is only underscored by the senselessness of Kaczynski's deplorable tactics.  There are other voices, less reckless than Kaczynski, who are nonetheless arguing for broad-based relinquishment of technology.  Bill McKibben, the environmentalist who was one of the first to warn against global warming, takes the position that "environmentalists must now grapple squarely with the idea of a world that has enough wealth and enough technological capability, and should not pursue more."  In my view, this position ignores the extensive suffering that remains in the human world, which we will be in a position to alleviate through continued technological progress

Another level would be to forego certain fields -- nanotechnology, for example -- that might be regarded as too dangerous.  But such sweeping strokes of relinquishment are equally untenable.  As I pointed out above, nanotechnology is simply the inevitable end result of the persistent trend towards miniaturization that pervades all of technology.  It is far from a single centralized effort, but is being pursued by a myriad of projects with many diverse goals.  

One observer wrote:

"A further reason why industrial society cannot be reformed. . . is that modern technology is a unified system in which all parts are dependent on one another.  You can't get rid of the "bad" parts of technology and retain only the "good" parts.  Take modern medicine, for example.  Progress in medical science depends on progress in chemistry, physics, biology, computer science and other fields.  Advanced medical treatments require expensive, high-tech equipment that can be made available only by a technologically progressive, economically rich society.  Clearly you can't have much progress in medicine without the whole technological system and everything that goes with it."

The observer I am quoting is, again, Ted Kaczynski.  Although one will properly resist Kaczynski as an authority, I believe he is correct on the deeply entangled nature of the benefits and risks.  However, Kaczynski and I clearly part company on our overall assessment on the relative balance between the two.  Bill Joy and I have dialogued on this issue both publicly and privately, and we both believe that technology will and should progress, and that we need to be actively concerned with the dark side.  If Bill and I disagree, it's on the granularity of relinquishment that is both feasible and desirable. 

Abandonment of broad areas of technology will only push them underground where development would continue unimpeded by ethics and regulation.  In such a situation, it would be the less-stable, less-responsible practitioners (e.g., terrorists) who would have all the expertise.   

I do think that relinquishment at the right level needs to be part of our ethical response to the dangers of 21st century technologies.  One constructive example of this is the proposed ethical guideline by the Foresight Institute, founded by nanotechnology pioneer Eric Drexler, that nanotechnologists agree to relinquish the development of physical entities that can self-replicate in a natural environment.  Another is a ban on self-replicating physical entities that contain their own codes for self-replication.  In what nanotechnologist Ralph Merkle calls the "broadcast architecture," such entities would have to obtain such codes from a centralized secure server, which would guard against undesirable replication.  I discuss these guidelines further below. 

The broadcast architecture is impossible in the biological world, which represents at least one way in which nanotechnology can be made safer than biotechnology.  In other ways, nanotech is potentially more dangerous because nanobots can be physically stronger than protein-based entities and more intelligent.  It will eventually be possible to combine the two by having nanotechnology provide the codes within biological entities (replacing DNA), in which case biological entities can use the much safer broadcast architecture.  I comment further on the strengths and weaknesses of the broadcast architecture below. 
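The logic of the broadcast architecture can be sketched in ordinary software terms.  The toy code below is only a conceptual illustration, not an actual nanotechnology design; the class names and the population cap are inventions for this sketch.  The essential point is that the replicators carry no replication instructions of their own, so growth stops the moment the central server stops granting them.

```python
class BroadcastServer:
    """Centralized secure server that doles out replication codes, one copy at a time."""

    def __init__(self, allowed_population):
        self.allowed_population = allowed_population
        self.replications_granted = 0

    def request_instructions(self, replicator_id):
        # Release replication codes only while the population cap holds.
        if self.replications_granted >= self.allowed_population:
            return None                            # refuse: cap reached, replication halts
        self.replications_granted += 1
        return f"build-steps-for-{replicator_id}"  # stand-in for the actual codes


class Replicator:
    """A device that cannot copy itself without codes from the broadcast server."""

    def __init__(self, rid, server):
        self.rid = rid
        self.server = server                       # the only source of replication codes

    def try_replicate(self):
        instructions = self.server.request_instructions(self.rid)
        if instructions is None:
            return None                            # cannot self-replicate autonomously
        return Replicator(self.rid + 1, self.server)


server = BroadcastServer(allowed_population=3)
population = [Replicator(0, server)]
for _ in range(10):                                # each device keeps trying to replicate
    children = [r.try_replicate() for r in population]
    population.extend(c for c in children if c is not None)
print(f"final population: {len(population)}")      # 4: the original plus the 3 granted copies
```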

As responsible technologists, we should include such "fine-grained" relinquishment among our professional ethical guidelines.  Other protections will need to include oversight by regulatory bodies, the development of technology-specific "immune" responses, as well as computer-assisted surveillance by law enforcement organizations.  Many people are not aware that our intelligence agencies already use advanced technologies such as automated word spotting to monitor a substantial flow of telephone conversations.  As we go forward, balancing our cherished rights of privacy with our need to be protected from the malicious use of powerful 21st century technologies will be one of many profound challenges.  This is one reason that such issues as an encryption "trap door" (in which law enforcement authorities would have access to otherwise secure information) and the FBI "Carnivore" email-snooping system have been controversial, although these controversies have abated since 9-11-2001. 

As a test case, we can take a small measure of comfort from how we have dealt with one recent technological challenge.  There exists today a new form of fully nonbiological self-replicating entity that didn't exist just a few decades ago: the computer virus.  When this form of destructive intruder first appeared, strong concerns were voiced that as they became more sophisticated, software pathogens had the potential to destroy the computer network medium they live in.  Yet the "immune system" that has evolved in response to this challenge has been largely effective.  Although destructive self-replicating software entities do cause damage from time to time, the injury is but a small fraction of the benefit we receive from the computers and communication links that harbor them.  No one would suggest we do away with computers, local area networks, and the Internet because of software viruses. 

One might counter that computer viruses do not have the lethal potential of biological viruses or of destructive nanotechnology.  This is not always the case; we rely on software to monitor patients in critical care units, to fly and land airplanes, to guide intelligent weapons in our current campaign in Iraq, and other "mission-critical" tasks.  To the extent that this is true, however, this observation only strengthens my argument.  The fact that computer viruses are not usually deadly to humans only means that more people are willing to create and release them.  It also means that our response to the danger is that much less intense.  Conversely, when it comes to self-replicating entities that are potentially lethal on a large scale, our response on all levels will be vastly more serious, as we have seen since 9-11. 

I would describe our response to software pathogens as effective and successful.  Although they remain (and always will remain) a concern, the danger remains at a nuisance level.  Keep in mind that this success is in an industry in which there is no regulation, and no certification for practitioners.  This largely unregulated industry is also enormously productive.  One could argue that it has contributed more to our technological and economic progress than any other enterprise in human history.   I discuss the issue of regulation further below.

Development of Defensive Technologies and the Impact of Regulation

Joy's treatise is effective because he paints a picture of future dangers as if they were released on today's unprepared world.  The reality is that the sophistication and power of our defensive technologies and knowledge will grow along with the dangers.  When we have "gray goo" (unrestrained nanobot replication), we will also have "blue goo" ("police" nanobots that combat the "bad" nanobots).  The story of the 21st century has not yet been written, so we cannot say with assurance that we will successfully avoid all misuse.  But the surest way to prevent the development of the defensive technologies would be to relinquish the pursuit of knowledge in broad areas.  We have been able to largely control harmful software virus replication because the requisite knowledge is widely available to responsible practitioners.  Attempts to restrict this knowledge would have created a far less stable situation.  Responses to new challenges would have been far slower, and it is likely that the balance would have shifted towards the more destructive applications (e.g., software viruses). 

The challenge most immediately in front of us is not self-replicating nanotechnology, but rather self-replicating biotechnology.  The next two decades will be the golden age of biotechnology, whereas the comparable era for nanotechnology will follow in the 2020s and beyond.  We are now in the early stages of a transforming technology based on the intersection of biology and information science.  We are learning the "software" methods of life and disease processes.  By reprogramming the information processes that lead to and encourage disease and aging, we will have the ability to overcome these afflictions.  However, the same knowledge can also empower a terrorist to create a bioengineered pathogen.

As we compare the success we have had in controlling engineered software viruses to the coming challenge of controlling engineered biological viruses, we are struck with one salient difference.  As I noted above, the software industry is almost completely unregulated.  The same is obviously not the case for biotechnology.  A bioterrorist does not need to put his "innovations" through the FDA.  However, we do require the scientists developing the defensive technologies to follow the existing regulations, which slow down the innovation process at every step.  Moreover, it is impossible, under existing regulations and ethical standards, to test defenses to bioterrorist agents.  There is already extensive discussion of modifying these regulations to allow animal models and simulations to replace infeasible human trials.  This will be necessary, but I believe we will need to go beyond these steps to accelerate the development of vitally needed defensive technologies. 

For reasons I have articulated above, stopping these technologies is not feasible, and pursuit of such broad forms of relinquishment will only distract us from the vital task in front of us.  In terms of public policy, the task at hand is to rapidly develop the defensive steps needed, which include ethical standards, legal standards, and defensive technologies.  It is quite clearly a race.  As I noted, in the software field the defensive technologies have remained a step ahead of the offensive ones.  With the extensive regulation in the medical field slowing down innovation at each stage, we cannot have the same confidence with regard to the abuse of biotechnology.

In the current environment, when one person dies in gene therapy trials, there are congressional investigations and all gene therapy research comes to a temporary halt.  There is a legitimate need to make biomedical research as safe as possible, but our balancing of risks is completely off.  The millions of people who desperately need the advances to be made available by gene therapy and other breakthrough biotechnology advances appear to carry little political weight against a handful of well-publicized casualties from the inevitable risks of progress.

This equation will become even more stark when we consider the emerging dangers of bioengineered pathogens.  What is needed is a change in public attitude in terms of tolerance for needed risk. 

Hastening defensive technologies is absolutely vital to our security.  We need to streamline regulatory procedures to achieve this.  However, we also need to greatly increase our investment explicitly in the defensive technologies.  In the biotechnology field, this means the rapid development of antiviral medications.  We will not have time to develop specific countermeasures for each new challenge that comes along.  We are close to developing more generalized antiviral technologies, and these need to be accelerated.

I have addressed here the issue of biotechnology because that is the threshold and challenge that we now face.  The comparable situation will exist for nanotechnology once replication of nano-engineered entities has been achieved.  As that threshold comes closer, we will then need to invest specifically in the development of defensive technologies, including the creation of a nanotechnology-based immune system.  Bill Joy and other observers have pointed out that such an immune system would itself be a danger because of the potential of "autoimmune" reactions (i.e., the immune system using its powers to attack the world it is supposed to be defending). 

However, this observation is not a compelling reason to avoid the creation of an immune system.  No one would argue that humans would be better off without an immune system because of the possibility of autoimmune diseases.  Although the immune system can itself be a danger, humans would not last more than a few weeks (barring extraordinary efforts at isolation) without one.  The development of a technological immune system for nanotechnology will happen even without explicit efforts to create one.  We have effectively done this with regard to software viruses.  We created a software virus immune system not through a formal grand design project, but rather through our incremental responses to each new challenge.  We can expect the same to happen as challenges from nanotechnology-based dangers emerge.  The point for public policy will be to invest specifically in these defensive technologies. 

It is premature today to develop specific defensive nanotechnologies since we can only have a general idea of what we are trying to defend against.  It would be similar to the engineering world creating defenses against software viruses before the first one had been created.  However, there is already fruitful dialogue and discussion on anticipating this issue, and significantly expanded investment in these efforts is to be encouraged. 

As I mentioned above, the Foresight Institute, for example, has devised a set of ethical standards and strategies for assuring the development of safe nanotechnology.  These guidelines include:

  • "Artificial replicators must not be capable of replication in a natural, uncontrolled environment."
  • "Evolution within the context of a self-replicating manufacturing system is discouraged."
  • "MNT (molecular nanotechnology) designs should specifically limit proliferation and provide traceability of any replicating systems."
  • "Distribution of molecular manufacturing development capability should be restricted whenever possible, to responsible actors that have agreed to the guidelines.  No such restriction need apply to end products of the development process."

Other strategies that the Foresight Institute has proposed include:

  • Replication should require materials not found in the natural environment. 
  • Manufacturing (replication) should be separated from the functionality of end products.  Manufacturing devices can create end products, but cannot replicate themselves, and end products should have no replication capabilities.
  • Replication should require replication codes that are encrypted and time-limited.  The broadcast architecture mentioned earlier is an example of this recommendation. 

These guidelines and strategies are likely to be effective in preventing the accidental release of dangerous self-replicating nanotechnology entities.  The situation with regard to the intentional design and release of such entities is more complex and more challenging: a sufficiently determined and destructive opponent could potentially defeat each of these layers of protection. 

Take, for example, the broadcast architecture.  When properly designed, each entity is unable to replicate without first obtaining replication codes.  These codes are not passed on from one replication generation to the next.  However, a modification to such a design could bypass the destruction of the replication codes and thereby pass them on to the next generation.  To overcome that possibility, it has been recommended that the memory for the replication codes be limited to only a subset of the full code, so that there is insufficient memory to pass the codes along.  However, this guideline could be defeated by expanding the size of the replication-code memory to incorporate the entire code.  Another protection that has been suggested is to encrypt the codes and to build in protections such as time-expiration limits in the decryption systems.  However, we have seen how easily protections against the unauthorized replication of intellectual property, such as music files, have been defeated.  Once replication codes and protective layers are stripped away, the information can be replicated without these restrictions. 
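
As a rough illustration of the encrypted, time-limited replication codes discussed above, the following Python sketch signs each code together with an expiry time, and a compliant replicator rejects any code whose signature or expiry check fails.  The shared key, function names, and scheme are assumptions made for illustration only, not a description of any proposed nanotechnology implementation.

import base64, binascii, hashlib, hmac, json, time

SERVER_KEY = b"secret-held-by-trusted-issuing-hardware"   # assumed shared secret

def issue_code(ttl_seconds: int = 60) -> str:
    """Issue a replication code that expires ttl_seconds from now."""
    payload = json.dumps({"expires": time.time() + ttl_seconds}).encode()
    signature = hmac.new(SERVER_KEY, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + signature

def code_is_valid(code: str) -> bool:
    """A compliant replicator would call this before every replication cycle."""
    try:
        payload_b64, signature = code.split(".")
        payload = base64.urlsafe_b64decode(payload_b64)
    except (ValueError, binascii.Error):
        return False                                       # malformed code
    expected = hmac.new(SERVER_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return False                                       # forged or tampered code
    return time.time() < json.loads(payload)["expires"]    # expired codes are refused

code = issue_code(ttl_seconds=5)
print(code_is_valid(code))                                 # True while the code is fresh
print(code_is_valid(code.rsplit(".", 1)[0] + "." + "0" * 64))  # False: bad signature

The weakness described above is visible in the sketch itself: nothing physically prevents a modified replicator from simply never calling code_is_valid, which is why such protections hold only up to a given level of attacker sophistication.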

My point is not that protection is impossible.  Rather, we need to realize that any level of protection will only work up to a certain level of sophistication.  The "meta" lesson here is that we will need to continue to advance the defensive technologies, keeping them one or more steps ahead of the destructive technologies.  We have seen analogies to this in many areas, including technologies for national defense and the largely successful efforts to combat software viruses that I alluded to above. 

What we can do today with regard to the critical challenge of self-replication in nanotechnology is to continue the type of effective study that the Foresight Institute has initiated.  With the human genome project, three to five percent of the budget was devoted to the ethical, legal, and social implications (ELSI) of the technology.  A similar commitment for nanotechnology would be appropriate and constructive. 

Technology will remain a double-edged sword, and the story of the 21st century has not yet been written.  It represents vast power to be used for all humankind's purposes.  We have no choice but to work hard to apply these quickening technologies to advance our human values, despite what often appears to be a lack of consensus on what those values should be.