Snowcrashing Into The Diamond Age 2 (Part One) by Extropia DaSilva

Extropia DaSilva is back with another one of her fantastic essays. Enjoy! — Gwyn

In part one of this essay, we examined that most infamous of dystopian nanotech outcomes, the ‘grey goo’ of self-replicating machines. In this second part the view shall be widened as we examine how molecular manufacturing might affect society as a whole. I am obviously not the first person to attempt such a thing. In fact, ever since Drexler established the field with his books ‘Engines Of Creation’, ‘Unbounding The Future’ and ‘Nanosystems’, there has been no end of speculation regarding how society will adapt to this paradigm shift in engineering. Some of these speculations are decidedly dystopian, others defiantly utopian, but if there’s anything their authors share in common it’s the fact that none of them have had first-hand experience of a society built on widespread access to molecular manufacturing. This is simply because the technology is still very much in the theoretical stage of development and no practical nanosystems currently exist.

However, I would argue that there does exist a society built around a manufacturing system that shares certain similarities with molecular nanotechnology. There are no prizes for guessing that I am referring to Second Life. I believe that SL can serve two useful purposes as molecular nanotechnology emerges from vapourware. The first is that we can take those aforementioned speculations and see if they have come true in this prototype nanotech society. The second is that, as the metaverse develops, it may be possible to guide its evolution so that it represents a bridge that helps us cross over to a nanotech society as painlessly as possible.

Weaknesses in My Argument

Before we really get stuck in, there are some issues to clear out of the way. The first is to acknowledge that nanotechnology’s effect on society will be so far-reaching that it is not possible to fully examine every aspect of it here. Omissions have to be made. On the dystopian side, I will not be discussing how war will be conducted in a world with widespread molecular nanotechnology, or whether its development decreases or increases the likelihood of conflict. Nor will I be discussing the possible negative environmental consequences of materials built at the nanometre scale. On the utopian side, I shall have nothing to say regarding the augmentation of human beings. There will be no discussions of ageless bodies or uploaded minds here.

Another point that needs to be addressed is the fact that building with prims is not exactly equivalent to molecular manufacturing. The main difference is that SL residents can violate the laws of physics. This is most obviously demonstrated by those floating homes that openly defy the law of gravity. But on a more subtle level, builders in SL use ‘material’ that does not possess physical properties like load bearing, wear, and material fatigue. Because of this, they are able to build structures that, even if they were made of diamondoid material (which has 55 times the tensile strength-to-density ratio of steel), would inevitably collapse if built in RL. Another weakness of using SL as a model of a nanotech society is that the various scenarios are based on the assumption that molecular manufacturing has replaced the current industrial system. Clearly, SL has not done so, but is instead more like a microcosm of a much larger society. Having said that, molecular manufacturing must itself be developed within the current industrial system, so I don’t consider this difference too damaging to the premise that SL is a prototype nanotech society.

Molecular Manufacturing Contrasts With Conventional Manufacturing But Compares To Prim Building

Right, that’s the flaws in the ‘prim-building is comparable to molecular manufacturing’ argument out of the way; now let’s examine how the two do compare, and why Drexler’s technology contrasts with the current manufacturing system. The main reason why conventional manufacturing is unlike molecular nanotechnology is that it approaches the task of creating useful products from a completely different direction. With the current approach, a purpose-suited device is distilled or carved from a mass of raw materials. Conventional manufacturing begins with large and unformed parts, and this fact has led to a trend towards larger, more centralised factories. The Industrial Revolution also set in motion a trend known as ‘division of labour’, which refers to the economies of scale that result from having a particular task performed by fewer groups, or fewer companies, or in fewer places. Specialisation leads to better products for less cost, because it makes use of workers who understand their job better than a generalist could, and because it eliminates redundant factories by consolidating many tasks into only a few.

An outcome of these trends is that factories became equipped with highly specialised tools that can only deal with a very limited range of materials. A sawmill is great for turning lumber into planks, fences, wooden pegs and so forth, but is completely useless if you want to churn out computer chips. A factory built for manufacturing cars is similarly ill-equipped to manufacture anything other than automobiles.

Current manufacturing methods are ‘subtractive’. In direct contrast to this, molecular manufacturing is ‘additive’. It takes a bottom-up approach to engineering by assembling the building blocks of matter into useful products, following a design that calls for only what’s needed. Defining what is meant by ‘feedstock’ is rather difficult with conventional manufacturing, because at the scale at which current systems manipulate matter, material comes in such a wide range of forms. But at the scale at which molecular manufacturing works, there are, at most, 92 different building blocks (the elements of the periodic table). What’s more, almost everything in the material world uses fewer than 20 of these elements.

The process detailed in Drexler’s ‘Nanosystems’ makes use of exponential manufacturing, in which integrated systems contain numerous subsystems attached to a framework. The process would begin with a flow of whatever elements are required (typical products require large quantities of carbon; moderate quantities of hydrogen, oxygen, nitrogen, phosphorus, chlorine, fluorine, sulphur and silicon; and lesser quantities of other elements). Molecular mills (mechanisms capable of selectively binding and transporting chemical species from a feedstock) would combine molecules into a diverse set of building blocks in the 10^-7 to 10^-6 m range. Block assemblers would assemble components, component assemblers would piece together subsystems and systems assemblers would manufacture the finished product. As far as a household with a desktop nanofactory is concerned, the basic building blocks are likely to be those nanoscale Lego bricks. At this point a comparison with prim-building should be obvious. In the physical world of desktop nanosystems, macro-scale products would be assembled from the bottom up by combining a diverse set of nanoblocks. In SL, builders take basic building blocks known as ‘prims’ and reshape and combine them into complex products. (The building blocks used by nanosystems could incorporate struts and joints that contain sliding interfaces, thereby allowing them to be extended or twisted to assume a wide range of lengths or angles. Moreover, the blocks could be assembled into as many objects as can be derived by reshaping prims. In that sense, nanoblocks may also be reshaped as needed.)

The building tools in SL allow content creators to make pretty much anything. Whether it be jewellery, clothes, furniture, houses or cars, it is all constructed from the same elementary building blocks. Admittedly, designers may employ additional tools like Photoshop, but even so prim-building is far more flexible than the highly specialised tools used in RL engineering today. Plastic-moulding machines and metal-cutting machines shape particular kinds of plastic and metal respectively; they do not possess the flexibility that would come from having tiny, fast-cycling parts that form complex patterns of the elementary building blocks of matter. Nanosystems, though, would have precisely that capability.

It was mentioned earlier that prims are the elementary building blocks of all products built in SL. Of course, this is a virtual world and in reality everything is built out of the building blocks of information, which are binary digits. Linden Lab’s prototype metaverse exists inside computers, which are machines that contain tiny, fast cycling parts that can be directed to form complex patterns of bits. Computers are extremely capable of processing bits and just as adept at copying information. This gives SL’s builders certain advantages that would not be possible using conventional manufacturing. Consider the way Fallingwater Celladoor goes about her job. “Sales vary a lot: Some things hit big, others don’t. I just make what I like and see what happens”. In other words, she makes use of rapid prototyping and deployment. In the real world, rapid prototyping does not exist in any meaningful way. So-called ‘rapid prototyping machines’ are very costly, take substantial time to manufacture something, and that something can only be a passive component, not an integrated product. Assembly is still required. As for high-volume manufacturing, overheads to be considered include procurement of supplies, training workers and the product must not only be useful but manufacturable as well. A significant part of the total design cost may be taken up by designing that manufacturing process.

A builder in SL does not need to worry about such things. There is no need to design the manufacturing process, because the content-creation tools are already in place. There is no need to worry about procurement of supplies, because prims are a readily available resource. There is no need to train workers to run the manufacturing process, because it’s carried out automatically by the power of computers. What’s more, while it requires time and effort to design and build a product in SL, this only applies to the first of anything. But once it’s done, once you have created your prim-based wonder, it requires zero effort from you to mass-produce it. A person turns up at your store, chooses whatever they like, and the information embodying its design is copied and a perfect reproduction is duly delivered to the customer’s inventory. If the item has been tagged as copyable, the customer can effortlessly give away the item to anyone without diminishing their own supply.

You may have noticed that describing the inner workings of computers as ‘fast cycling parts that can be directed to form complex patterns of information’ bears similarities to desktop nanosystems, which contain fast-cycling parts that can be directed to form complex patterns of the building blocks of matter. You would expect to find nanosystems offering a similar set of advantages, and this is indeed the case. Procurement of supplies is no problem, since all the process requires is a mixture of simple compounds (carbon, the main ingredient, currently costs about $0.10 per kg). The nanofactory would then convert that feedstock into the finished product, and the intermediate stages would not require external handling or transport — no need to train workers to run these factories! It would take about an hour to produce a functional prototype at a cost of a few dollars per kg, regardless of the complexity of the product. The approved design could immediately be put into production.

Most of the internal volume of a desktop nanosystem is devoted to open workspaces for manipulators. According to Drexler, ‘it should be feasible to design a system that can be unfolded from linear dimensions of ~0.2 m to linear dimensions of <0.4 m…with the use of programmable manipulators to build a diverse set of structures from a smaller set of building blocks, the output of a set of specialized mills can be used to build an identical set of mills, as well as many other structures’. In other words, a desktop nanosystem can build an identical desktop nanosystem. Strictly speaking, it’s also possible to build a duplicate of a conventional factory: if we had the technology to build one of them, we could obtain the supplies to build its twin. But at that scale, it would take more than a year for a system to produce outputs with a complexity equalling its own. A 1 kg desktop nanosystem, however, would manufacture an identical system in about an hour.
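
To get a sense of what hourly self-replication implies, here is a back-of-the-envelope sketch in Python (the one-hour doubling time is the figure quoted above; the target of roughly one nanofactory per household is my own illustrative assumption):

    # Each 1 kg nanofactory builds a copy of itself in about an hour,
    # so the population of nanofactories doubles every hour.
    import math

    doubling_time_hours = 1.0      # replication time quoted above
    target_units = 2_000_000_000   # roughly one per household (assumed)

    doublings = math.ceil(math.log2(target_units))
    print(f"{doublings} doublings -> {2 ** doublings:,} nanofactories")
    print(f"time taken: about {doublings * doubling_time_hours:.0f} hours")
    # 31 doublings: about a day and a half from the first working unit

Contrast that with a conventional factory, which needs more than a year just to reproduce outputs of its own complexity.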

More than anything else, it is this capability that sets molecular nanotechnology apart from our current system. In principle, a society with access to nanosystems would be different from all previous economies, because the means of production themselves are replicable from cheap, readily available elements. The only society based on a comparable system is Second Life, with its cheap, readily available prims that can be assembled into complex products and thereafter effortlessly duplicated. Drexler was unflinching in his appraisal of the expected consequences of molecular manufacturing: ‘The industrial system won’t be fixed, it will be junked… the wholesale replacement of 20th Century technologies’.

But then what? Replace the engines — the entire system of the industrial revolution — and what becomes of the society it supports? What about employment, money, social status? Would current notions of work and leisure still apply and if not would they be replaced by something better, or worse?

The Need For Molecular Manufacturing

As I said earlier, one can find all manner of speculations about how society will adjust to this paradigm shift in manufacturing. Some commentators expect a kind of enlightenment to follow widespread access to nanosystems: ‘People will not need to work to make a living. We will be living in a world where true equality exists’. Others see things entirely differently. Damon Knight’s story, ‘A For Anything’, is a dystopian tale in which America breaks down into lawlessness, and innovation comes to be seen as intolerably disruptive, following the proliferation of ‘gizmos’ that provide for all material needs.

And then there are those who question whether there is an incentive to develop the technology in the first place. In ‘The Spike’, Damien Broderick asked, ‘why should canny investors choose to move their money into the nano field if they can see… general assembler machines that literally compile material objects, including more of themselves? Where’s the profit in that?’ One reporter witnessed at first hand the angry response of shop owners at the introduction of a device that could copy their wares: ‘One resident created a massive boulder, instantiated it next to the vendor, and as it grew a hundred metres in diameter, flung cat, protesters, and embedded journalist in every direction’.

Well done if you successfully spotted that the last quote (from a blog entry by Hamlet Au) concerned the ‘copybot versus SL seller’s guild’ controversy. LibSL’s ‘copybot’ caused a certain amount of panic upon its release in SL, as it was feared the economic system would collapse as everyone ran around making illegal copies of items they would otherwise have to pay for. In the end the Lindens stepped in and declared that using copybot or some similar tool was a violation of TOS, punishable by exile from SL. The whole controversy seemed to fade away almost as quickly as it arose. Does this episode suggest that we should expect to see practical molecular assemblers similarly banned?

I asked such a question at a Thinkers discussion, and Nite Zelmanov was quite certain that copybot had not been banned. ‘Copybot is NOT banned. What it does is 100% allowed. Doing that to things you’ve been told not to is a TOS violation’. When I asked if there were legitimate reasons for using copybot, Zelmanov answered, ‘thousands of legitimate uses, (for instance) copybot is currently the only way to backup your prims/texture-based creations outside of SL. Many people use it for that today’. Similarly, would there be an advantage in pursuing molecular manufacturing, even if perfecting this technology would threaten the economic system as we currently understand it?

The answer is that pursuing molecular nanotechnology is more than a luxury we can choose to opt out of; it is a non-negotiable condition for survival. This has been known for over 200 years, following the publication of ‘An Essay On The Principle Of Population’ by Thomas Malthus. Malthus noted that growing populations tend to expand exponentially, but the food supply can only increase by a fixed amount per year and so exhibits linear growth. Any rate of exponential growth must eventually outstrip linear growth, which means unchecked population growth must outrun food production. At which point, of course, starvation and death ensue.

Sounds grim, but this essay was published in 1798 and more than two centuries later food production is still keeping up with population growth. This tends to make his argument (and similar ones, like Paul Ehrlich’s ‘The Population Bomb’, which in 1968 gloomily declared ‘the battle to feed humanity is over… hundreds of millions are going to starve to death’) come across as needless doom-mongering. It is not. Predicting that exponential growth in population size WILL outrun resources is mathematically undeniable. Predicting WHEN limits will pinch is rather more difficult. Malthus failed to anticipate breakthroughs in farm equipment, crop genetics and fertiliser. Similarly, the so-called ‘Green Revolution’ averted the catastrophe that Ehrlich foresaw, thanks to new generations of high-yield crops and the industrialisation of agriculture.
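
The underlying mathematics is easy to demonstrate with a toy model (a sketch only: the starting values, growth rate and increments below are arbitrary assumptions chosen for illustration, not predictions):

    # Toy Malthusian model: population grows exponentially, food linearly.
    def crossover_year(pop=1.0, rate=0.02, food=2.0, step=0.05):
        """Return the first year in which population exceeds food supply."""
        year = 0
        while pop <= food:
            pop *= 1 + rate   # exponential growth
            food += step      # linear growth
            year += 1
        return year

    print(crossover_year())           # baseline parameters: ~year 98
    print(crossover_year(step=0.10))  # doubling the increment: ~year 141

However generous the linear term, the exponential curve wins in the end; a ‘Green Revolution’ merely shifts the crossover further out, which is exactly the difference between the WILL and the WHEN.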

Technology has delayed the Malthusian catastrophe, but is itself dependent on the Earth’s natural resources. It would be more appropriate to say the Green Revolution diverted the problem rather than averted it. The US food system consumes ten times more energy than it produces in food. Fossil fuels make this disparity possible, but they are in finite supply. It also requires copious amounts of fertiliser: worldwide, more nitrogen fertiliser is used per year than can be supplied from natural sources. Finally, it is dependent on water. We can grow twice as much food as we could a generation ago, but require three times as much water to do so. Fresh water is being lost at a rate of 6% per year, and in the last three decades we consumed a third of the world’s natural resources.

In four decades’ time, the Earth will be able to provide a maximum of 3.5 acres of land per person. Environmentalists talk about ecological footprints in order to give an idea of how much of the planet’s natural resources we consume. A person with a 4-acre footprint requires 4 acres’ worth of resources to maintain their lifestyle. Currently, the Earth can provide 5.3 acres of land per person, and so an ecological footprint of 4 acres would be sustainable. Unfortunately, developed nations are not living within their means. The ecological footprint of the average American is 24 acres’ worth of land. The average Brit uses 11 acres’ worth. These and other countries are said to have an ecological deficit, because the number of acres that exist in those countries is not sufficient to support the lifestyles of the populace. This fact has led to the conclusion that the Malthusian catastrophe was postponed, not averted.

Now that nearly all the productive land on this planet is being exploited by agriculture, the only way to keep delaying the Malthusian Catastrophe is to learn how to manage what we have more efficiently. In practical terms, this largely involves developing ways of exercising finer and finer control over matter and obviously molecular nanotechnology is the endpoint of such ongoing efforts. We are discovering, as our control over matter heads towards the molecular level, that what we took to be fundamental problems were in fact temporary failings of inferior technology. I suspect most people believe industry is inextricably linked with sewage, waste and pollution. But all of these result from inadequate control over how matter is handled. Toxic wastes consist, generally speaking, of harmless atoms arranged into noxious chemicals; much the same can be said of sewage. Such waste could be converted into harmless forms once we have the tools to work with matter at the molecular level. With much greater control over the handling of matter, waste would no longer be disposed of by dumping it into landfills, rivers and the air. Products we no longer need would be disassembled into simple molecules, ready for near-total recycling. There may still be some waste in the form of leftover atoms, but these would most likely be ordinary minerals and simple gases.

Some forms of material (elements like lead, mercury and cadmium) are intrinsically toxic. Such elements need play no role in molecular manufacturing processes or products, though they may be introduced into the system via a bad mix of raw material. Since molecular manufacturing cannot create elements (only combine them into complex structures), the process cannot be blamed if such elements come out. If toxic elements resulting from a bad mix of raw material do come out, chemically bonding them into a stable mineral and putting them back where they came from would be the best method of disposal.

The pollutants causing the most concern right now are greenhouse gases. Earth would be too cold to support life forms such as humans were it not for gases in the atmosphere, like carbon dioxide, that trap some of the heat from the Sun as it is radiated back from the surface of the Earth. That our planet has long had the right amount of greenhouse gases in the atmosphere to make it hospitable is no coincidence. In fact, the geosphere and biosphere interact in complex ways that control the levels of carbon dioxide, sort of like the way a thermostat regulates the temperature in a room. Or, rather, they did. But since the Industrial Revolution, industry powered by the burning of fossil fuels has artificially pumped billions of metric tons of carbon dioxide into the atmosphere, which is far more than those self-regulating systems can cope with. As they break down, the Earth’s climate may change in ways that are detrimental to us.

Time for more scary statistics. As developing countries rise from poverty to prosperity, it is expected that CO2 emissions will rise sharply. Singapore’s development saw its emissions rise from 1 metric ton of CO2 per person to 22 metric tons in three decades. India’s economic development is expected to increase its emissions from 1.1 metric tons per person, to 12 metric tons. It seems that rising prosperity goes hand in hand with worsening air quality and declining climate stability.

But, such predictions stem from assumptions that greater wealth means greater resource consumption, that the burning of fossil fuels, deforestation and scarring the Earth to mine for minerals will continue, and that pollution is the inevitable consequence of industry. Drexler pointed out that all of these assumptions are dependent on the belief that industry as we know it cannot be replaced. The successful development of molecular nanosystems would refute this assumption, and turn what’s now considered to be a harmful pollutant into a useful resource. After all, carbon is the main building material in molecular manufacturing, and 20th century industries have pumped enough into the atmosphere to provide 31,000 kilos for every person alive today. Using almost nothing but the waste from 20th century industry, a civilization built on nanosystems could support a population of 10 billion to a high standard of living, using just 3% of present US farm acreage to do so.
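
Taking the essay’s figures at face value, the arithmetic behind that last claim is simple to check (a sketch; the world population value is my own rough assumption for the mid-2000s):

    # Sanity check on the carbon-in-the-atmosphere claim above.
    carbon_per_person_kg = 31_000   # figure quoted above
    world_population = 6.5e9        # assumed mid-2000s population

    total_kg = carbon_per_person_kg * world_population
    print(f"{total_kg:.1e} kg, or about {total_kg / 1e12:.0f} billion tonnes")
    # ~2.0e14 kg, i.e. roughly 200 billion tonnes of carbon: the right
    # order of magnitude for cumulative fossil-fuel emissions to date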

Of course, this will require energy as well as matter. Molecular nanotechnology can help in two ways. Firstly, it would greatly reduce the energy requirements of manufacturing. Reductions in energy requirements are also something we see in SL. In ‘Second Lives’, the author Tim Guest noted that a flight across the Atlantic to interview Philip Rosedale produced the same amount of carbon as running a small family car non-stop for two years. But an inworld interview between their avatars produced a carbon footprint ‘equivalent to keeping the fridge door open for five minutes’. The same author noted that 16 acres of SL real estate consumes 280 kilowatt hours of electricity per year, whereas a thousand-square-foot retail outlet in RL consumes as much energy in a week.

The strongest materials that can be produced in bulk today achieve a mere 5 percent of theoretical molecular strengths. ‘Things break down’ seems to be a fundamental law, but a lot of atoms need to be out of place for failures to occur, and so the occurrence of failures can be attributed to the fact that we currently handle matter in a very crude fashion. The more precisely a product is manufactured, the less likely it is to contain the atomic defects, impurities, dislocations, grain boundaries and microcracks from which malfunctions occur. Molecular manufacturing would develop materials that are many times stronger, and a great deal lighter, than steel. Products would be assembled with atomic precision. They would be tough and reliable, making malfunctions virtually non-existent.

The second way molecular nanotechnology can help is by making solar power a viable source of energy. Currently, solar power costs about $2.75 per watt, but it has been estimated that nanotechnology will lower the price by a factor of ten to one hundred. Molecular manufacturing would not only make solar cells cheap, but tough enough to replace asphalt as the material of choice for surfacing roads. Drexler also explained that ‘Molecular manufacturing… could make them tiny enough to be incorporated into the building blocks of smart paint. Once the paint was applied, its building blocks would plug together to pool their electrical power and deliver it through some standard plug’. The far lighter and more durable products of molecular nanotechnology imply that a global industry built on nanosystems would require only 30 trillion watts, which could be obtained by capturing a mere three ten-thousandths of the solar energy striking the Earth.
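
That final fraction is straightforward to sanity-check (a sketch; the solar constant and Earth radius are standard values, while the 30-trillion-watt demand is the figure quoted above):

    # How big a slice of incident sunlight is 30 trillion watts?
    import math

    solar_constant = 1361.0   # W/m^2 at the top of the atmosphere
    earth_radius = 6.371e6    # metres
    intercepted = solar_constant * math.pi * earth_radius ** 2

    demand = 30e12            # 30 trillion watts, as quoted above
    print(f"Earth intercepts {intercepted:.2e} W of sunlight")
    print(f"fraction needed: {demand / intercepted:.1e}")
    # ~1.7e-4: agreeing, to order of magnitude, with Drexler's
    # 'three ten-thousandths' figure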

Pseudorefutations

At this point, it would be worth debunking a few arguments that critics assume are devastating to the concept. Some people believe that oil companies will use their financial might and political influence to block the development of any technology that could replace fossil fuels as the lifeblood of industry. In all fairness, there is evidence to support such a view. For instance, when municipalities in California decided to purchase cleaner, natural gas buses, the diesel industry sued to block the switch. It’s worth remembering, though, that fossil fuels have a limited lifespan as an energy source. They become unviable, not when the Earth’s stockpiles are completely exhausted, but when the energy required to extract a barrel of oil exceeds the energy that can be obtained from a barrel of oil. Oil discovery and development costs tripled from 1999 to 2006, which led J. Robinson West (chairman of oil industry consulting firm PFC Energy) to comment, ‘there are no easy barrels left’. Talking about the fundamental limitations that will eventually block further improvements in integrated circuits, Hans Moravec said, ‘as long as conventional approaches continue to be improved, the radical alternatives don’t have a competitive chance. But as soon as progress in conventional techniques falters, radical alternatives jump ahead’. I suppose it’s possible that oil barons will block development of alternative energy right up until the moment fossil fuels have utterly exhausted their energy potential, causing the total collapse of civilization and propelling us back to the Stone Age. I think it is rather more likely that the paradigm shift Moravec spoke of will apply to radical alternatives in energy.

Another poor criticism is one I like to call the ‘non-existent flying car’ argument, whereby a critic points out that futurists predicted we would have flying cars (or some other wonder) by the year 2000, and that, several years after the millennium, they have failed to materialise. The critic then confidently asserts that predictions concerning molecular manufacturing are just as dubious. The massive flaw in such arguments is that they fail to recognise that molecular nanotechnology is not a single device (like a flying car) but rather the endpoint of a trend that pervades all of technology and nearly all scientific progress. That trend is the pursuit of the precise control of matter. Scientists in the field of chemistry are always striving to synthesize more complex chemicals. This requires the development of instruments that can be used to prod, measure and modify molecules, helping chemists to study their structure, behaviours and interactions on the nanoscale. Biologists strive not only to find molecules but to learn what they do. Molecular manufacturing would provide the means to map cells completely and reveal the molecular underpinnings of disease and genetic disorders.

Materials scientists strive to make better products. With access to molecular manufacturing, a few tons of raw material would produce a billion cubic-micron sized samples and one laboratory could do more than all of today’s materials scientists put together. Moreover, molecular manufacturing would allow new materials to be built according to plan, making the field far more systematic and thorough than it is now. On a related note, car, aircraft and especially spacecraft manufacturers are obsessed with chasing the Holy Grail of materials science, which is to produce products that are both lightweight and strong. Reducing mass saves materials and energy. Products made of diamondoid would have an identical size and shape to those we make today but would be simultaneously stronger and 90% lighter. Constructing such material requires advanced mechanosynthesis, made available through molecular nanotechnology.

In fact, manufacturing as a whole continually strives to make better products, and so the natural endpoint is the precise, molecular control of complex structures — the very definition of nanosystems. As each individual field gets closer to this goal, it will become increasingly necessary to collaborate with other fields because nanotechnology is distinguished by its interdisciplinary nature. We see such developments occurring today. The most advanced research and product development calls for knowledge of disciplines that hitherto operated mostly independently of one another. The full potential of nanotechnology lies in the gaps between academic fields.

Molecular nanotechnology might require expertise in multiple fields, but currently the academic community is not geared towards such multidisciplinary research. This may well be a consequence of the division of labour that we talked about earlier, which favours specialists over generalists, but it has led to a similar division in the research community, where proposals are evaluated by experts within one field who have little or no understanding of the developments in fields outside of their expertise. Where nanosystems are concerned, this has led to pseudoexperts who debunk their own misunderstood concept of Drexler’s proposal. As Drexler explained, ‘a superficial glance suggests something is wrong – applying chemical principles leads to odd-looking machines, applying mechanical principles leads to odd-looking chemistry’. What is actually lacking is the ability to appreciate a deeper view of how these principles interact.

One particularly prevalent misconception (one I myself unfortunately made in part one) is to describe molecular nanotechnology in terms of ‘building things atom by atom’. This approach is rightly criticised as unfeasible, because unbound reactive atoms would react and bond to the manipulator (this was called the ‘sticky fingers problem’ by nanocritic Richard Smalley). But this fact is not a problem for molecular nanotechnology, because assemblers were never conceived as tiny tweezers picking up and positioning atoms in the first place. What these assemblers actually do is mechanically guide reactive molecules: whereas chemistry involves a lot of molecules wandering around and bumping together at random, assemblers would control how molecules react via a robotic positioning system that brings them together at the specific location and at the desired time. In short, Drexlerian nanotechnology applies the principles of mechanical engineering to chemistry. It is properly defined as ‘the process that uses molecular machinery to guide reactive molecules’; it is a misconception to describe it as ‘building things atom by atom’. It is true that the construction of specific molecules is governed by the physical forces between the individual atoms composing them, and it is equally true that controlling the motions and reactions of individual molecules implies controlling the motions and destinations of their individual constituent atoms, but it is not true that molecular nanotechnology builds with individual, unbound (and, hence, highly reactive) carbon atoms. That is not the only misconception; there are many more, and they can all be attributed to the same root cause: hardly anyone knows both chemistry and mechanical design. Fortunately, there is a solution.

Web 2.0 And The Building of Bridges

In part one, I said ‘if you want to be a nanotechnologist you have to have a grounding in chemistry’. In the 1960s, students in American grade schools and junior high schools were taught to use numbers written in base 2 (binary), as well as the more familiar base 10, because it was assumed that the approaching ‘computer age’ would require everyone to be adept at writing assembly language programs. The vision of computers in our working and social lives was certainly spot-on, but today hardly anyone even needs to know that all these machines ultimately do is addition and multiplication of integers, so neatly are these mathematical operations hidden beneath search engines, graphical user interfaces and pre-installed software packages. Engineers have long been used to computer-aided design software that can reduce what would have been weeks of work with a pen and paper to a simple click of the mouse. Now we are beginning to see CAD packages like NanoEngineer 1, a 3D molecular engineering program that ‘has been developed with a familiar intuitive user interface for mechanical engineers with experience using CAD… NE1 doesn’t even require the user to know much about chemistry to use it’. If simulations incorporated into CAD software can help engineers absorb knowledge of chemical rules without learning chemistry in the classic nose-in-a-textbook sense, that would be a step towards opening the bottleneck caused by a shortage of knowledgeable designers.

The metaverse may offer further solutions to the problem of bringing disparate teams of scientists and engineers together. Systems and applications tend to have a lot of complexity, which is largely due to the fact that such systems have grown from the machine up. IBM’s chief technology strategist, Irving Wladawsky-Berger, put it like this: ‘We do our machines, we do middleware, we do applications, then we put in a thin layer of human interface’. But, by recreating person-to-person social and commercial interactions in an online 3D space, SL demands interfaces that give top priority to the user. ‘One of our biggest challenges is to make IT systems and applications far, far more useable to human beings. IT systems in business, healthcare, education, everything’.

Now, it must be admitted that ‘intuitive’ and ‘user friendliness’ are not words that immediately spring to mind when describing SL’s user interface. But in 2007 the viewer software was open-sourced, which would make the client software accessible for modification and improvement by ‘a bigger group of people writing code than any shared project in history, including Linux’ (Cory Ondrejka). A few companies have already taken up the challenge of improving SL’s client code. One such company is Electric Sheep Co. Its CEO explained, ‘LL has done extraordinarily well creating a platform for motivated early adopters, but they have not made the front-end experience ready for the mass-market. These barriers will be addressed very rapidly upon the adoption of the open source initiative’.

Meanwhile, Linden Lab recently announced its plans to team up with IBM to create 3D Internet standards, with the intention of eventually allowing users to ‘connect to virtual worlds in a way similar to the way users move across the Web’, seamlessly travelling from one online world to the next while retaining the same name, appearance, and attributes like digital assets. Also being worked on are ‘requirements for standards-based software designed to enable security-rich exchange of assets in and across virtual worlds (allowing) users to perform purchases or sales with other people in VR worlds for digital assets’ (which, with the availability of desktop assemblers, would include the control files that instruct the system in how to assemble building blocks into finished products). And, yes, they are working on developing more user-friendly interfaces.

SL is but one example of a growing range of web-based applications, and collectively these are developing in directions that may supply yet more solutions to the design bottleneck. In an earlier essay (‘The Metaverse Reloaded’) I pointed out that ‘the Internet is evolving… into a vehicle for software services that foster participation and collaboration’. Gwyneth Llewelyn’s way of explaining what ‘Web 2.0’ means was ‘List all possible media, list the word “shared” before it, and we’ve covered the whole spectrum of Web 2.0 applications’. Then she asked ‘is that all?’, which rather implies that tools promoting the sharing and ‘mashup’ of information (i.e., combining two or more sets of data into an integrated whole, for example overlaying air-traffic control data on Google Earth) have little hope of generating genuinely new discoveries.

Yes, well, declarations like that reveal a failure to appreciate that nanotechnology exposes the core areas of overlap between the fundamental sciences (physics, materials science, mechanical engineering, life sciences, chemistry, biology, electrical engineering, computer science, IT). Recall that the problem academia currently has in developing nanosystems stems from the fact that each discipline has developed its own proprietary vernacular, effectively cutting each one off from its neighbouring disciplines and making exploration of the gaps between scientific fields (where the potential of nanotechnology lies) almost impossible. Vernor Vinge proposed a way out of this dilemma that has obvious parallels with Web 2.0: ‘In the social, human layers of the Internet, we need to devise and experiment with large-scale architectures for collaboration (and) extend the capabilities of search engines and social networks to produce services that can bridge barriers created by technical jargon’.

As it happens, many scientists are beginning to use blogs as modern-day intellectual salons, and there is an increasing number of science-based social networking sites and data-sharing tools. For example, the publishing group responsible for the journal ‘Nature’ has developed ‘Connotea’, which adds a toolbar to your web browser that allows you to save a link whenever you come across an interesting reference. You then tag your references with keywords, which lets you share your bookmark library. There is much scope for social bookmarking applied to scientific research, bringing together once-disparate groups with hitherto unseen complementary problems and solutions.

Being breezily optimistic for a moment: as online worlds develop into the metaverse, they will incorporate CAD tools that show, using quantum chemistry calculations, how molecular structures affect each other. This information is not represented by coldly abstract equations and graphs; it taps into the visual and tactile senses we evolved to be past masters at using. The maths is hidden deep within beautifully intuitive user interfaces. The gaps between disciplines are filled in by new generations of Web 2.0 collaborative tools which, together with the visualisation packages that allow specialists to see what was once only visible to the polymath, bring about a thorough exploration of nanosystem design space.

How Hard Can It Be?

It had better be like that, because getting nanosystems out of the conceptual stage and onto the market is a formidable challenge. One of the deceptive things about molecular nanotechnology is that it sounds so simple. Descriptions written for the layperson compare molecules and their bonds to the parts in a tinker toy set, and coupled with a reference to Lego when explaining the utility of nanoblocks, the overall picture is one of child’s play. One person who made some headway in dispelling notions that developing productive nanosystems would be easy is Lyle Burkhead, the second person to join up as a senior associate of the Foresight Institute (a non-profit organization dedicated to the development of safe molecular nanotechnology). His reasoning begins with a reference to the one existing proof of principle that complex systems can be built from the molecular level up: life. Machines require struts and beams to hold positions; cables, bearings and fasteners to transmit tension and connect parts; motors to turn shafts and drive shafts to transmit torque. In biology, there are molecular structures that perform all of these functions. Nanosystems would need tools to modify workpieces, production lines to control devices, and control systems to store and read programs. Again, nature shows us the feasibility of building such systems on the nanoscale. Enzymes and reactive molecules modify workpieces, ribosomes control devices and the genetic system stores and reads programs.

So what’s the problem? Well, Burkhead pointed out that ‘a general purpose, programmable system would be like a general purpose (programmable) ant colony. How much would you have to know about ants, their society, their genome, before you could make them programmable and able to build structures to specification?’. A further problem is that, because molecules and micron-scale blocks are so tiny, enormous numbers of them are needed in the construction of macro-scale objects. By way of illustration, let’s compare the total number of prims in SL to the number of nanoblocks required to build a 1 kg object. I don’t know what the maximum number of prims SL can render is, but an attack of self-replicating spheres reached 5 billion, so let’s assume that Linden Lab’s servers can handle a maximum of ten billion prims. Well, the number of micron-scale blocks needed to build a 1 kg object is substantially greater: a million billion. Furthermore, each one of these blocks would itself be built from a hundred billion molecular fragments — an order of magnitude above my hypothetical maximum number of prims. Drexler has calculated that a thousand people making a thousand design decisions per second would require a century of eight-hour days to design a single cubic micron. And then you would need a million billion lines of code to specify how 10^15 such blocks should be positioned to build that 1 kg object.
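
For the sceptical, those figures can be checked by multiplying out the numbers quoted in the paragraph above (a sketch using only the paragraph’s own values):

    # The scale of the nanoblock design problem, from the figures above.
    blocks_per_kg = 1e15         # micron-scale blocks in a 1 kg object
    fragments_per_block = 1e11   # molecular fragments per block
    prim_ceiling = 1e10          # my hypothetical SL prim maximum

    print(f"fragments per block vs prim ceiling: "
          f"{fragments_per_block / prim_ceiling:.0f}x")

    # Drexler's design-effort estimate for a single cubic micron:
    decisions_per_second = 1_000 * 1_000        # 1,000 people x 1,000/s
    century_of_8h_days = 100 * 365 * 8 * 3600   # in seconds
    print(f"design decisions: {decisions_per_second * century_of_8h_days:.1e}")
    # ~1.1e15 decisions needed to specify one cubic micron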

A nanofactory would be composed of systems and subsystems whose components are beyond current engineering feasibility. I don’t think it’s too much of an exaggeration to say that Drexler’s ‘Nanosystems’ (published in 1992) is comparable to some visionary conceptualising Second Life in 1832, when the computer existed only as principles outlined by Charles Babbage, and the communications system invented by Samuel Morse was still five years from realization. All of which raises two questions. How do you build a system when its complexity lies beyond technical feasibility, and how do you write a program with a million billion lines of code when such an endeavour is out of the question on grounds of complexity?

Backwards Chaining

The answer to both questions can be found in computer science. “Bootstrapping” is a term that describes the process of putting together a complex system by hooking up a number of more primitive parts. There is also an analytical tool engineers use known as ‘backward chaining’. The idea is that you start with a goal, and then work backwards through a series of intermediate steps until you arrive at capabilities that are currently accessible. Regardless of size, manufacturing any machine requires two capabilities: fabrication of parts and assembly of parts. Assembly of parts can be achieved in two ways. You can either actively position parts in the desired location and orientation, a process known as positional assembly, or you can allow the parts to move at random until they ’settle in’ to the right position: self-assembly.
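
As a toy illustration of backward chaining (a sketch; the dependency table below is entirely invented for illustration and does not come from Drexler’s pathway):

    # Toy backward chaining: walk from a goal back to present capabilities.
    DEPENDS_ON = {
        "desktop nanofactory": ["diamondoid assembler"],
        "diamondoid assembler": ["positional assembly", "vacuum operation"],
        "positional assembly": ["solution-phase self-assembly"],
        "solution-phase self-assembly": [],   # accessible today
        "vacuum operation": [],               # accessible today
    }

    def backward_chain(goal, chain=None):
        """List the steps from a goal down to currently accessible ones."""
        chain = chain if chain is not None else []
        chain.append(goal)
        for prerequisite in DEPENDS_ON[goal]:
            backward_chain(prerequisite, chain)
        return chain

    print(backward_chain("desktop nanofactory"))

Read the printed list from right to left and you have a development pathway: start from what is accessible today and bootstrap towards the goal.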

We already possess three enabling technologies that demonstrate at least a primitive parts fabrication and assembly capability on the nanoscale: namely, biotechnology, supramolecular chemistry and scanning probes. Molecular biologists and genetic engineers have demonstrated the possibility of achieving positional assembly using microbes, viruses, proteins and DNA. Gerald Sussman of MIT said, ‘Bacteria are like little workhorses for nanotechnology; they’re wonderful at manipulating things in the chemical and ultramicroscopic worlds’. DNA has proved useful for nanoscale construction purposes, acting as a molecular scaffolding. Simple molecular machines have also been built from DNA. Researchers from Ludwig Maximilians University have built a DNA-based molecular machine that can bind to and release single molecules of a specific type of protein, and which can be made to select any of many types of proteins. A prototype of a nanoscale robot arm has also been constructed using DNA. This controllable molecular mechanical system has two rigid arms that can be rotated between two fixed positions.

Various protein-based components and devices have also been constructed. Kinesin motors attached to flat surfaces in straight grooves were shown passing 25-nanometer-wide microtubules hand over hand in the manner of a ciliary array. Ratchet-action protein-based molecular motors are well known in biology, a special genetic variant of yeast cell prions has been used to self-assemble gold-particle-based nanowires and, according to Rob Freitas and Ralph Merkle, ‘antibody molecules could be used to first recognise and bind to specific faces of crystalline nanoparts, then as handles to allow attachment of the parts into arrays at known positions, or into more complex assemblies’.

Supramolecular chemistry can build up complex molecular parts from simpler molecular parts. Kurt Mislow has synthesized molecular gear systems which ‘resemble, to an astonishing degree, the coupled rotations of macroscopic mechanical gears. It’s possible to imagine a role for these and similar mechanical devices, molecules with tiny gears, motors, levers etc in the nano of the future’. Also, Markus Krummenacker is developing molecular building blocks, with the intent of opening up a pathway that leads to a primitive, polymer-based assembler.

Protein engineering and macromolecular engineering are examples of self-assembly. A disadvantage is that solution-phase synthesis cannot provide orientation or positional control, and it has a maximum complexity of about 1,000 steps. Values in the upper range are seldom achieved, and lie an order of magnitude below the number of steps required to assemble the molecular machine systems of even Drexler’s most primitive design.

The third type of enabling technology, scanning probes, has provided experimental proof that molecules and molecular parts can be mechanically positioned and assembled with atomic precision. A group at the University of North Carolina has created an interactive haptic control system called ‘Nanomanipulator’. This device enables users to ‘feel’ the interatomic forces as atoms are pushed around on a surface, using a hand-held master-slave controller that drives an STM probe while the position of the atom is displayed on a monitoring screen visible to the user.

But, while reproducing a map of the world, a portrait of Einstein, or IBM’s corporate logo by positioning individual atoms is pretty impressive, it is still a very long way from the complex three dimensional lattices that would be required in molecular nanotechnology. Work is underway to build nanoscale attachments that will act as ‘grippers’ for binding and manipulating specific molecules. These grippers will emphatically not be tweezers, mechanically picking up atoms. Rather, they might be something like fragments of antibody molecules. The trick would be to get the “back” of the molecule stuck onto an AFM tip, which would then allow the “front” to bind and hold molecular tools. According to Drexler, ‘if you want to do something with tool type A, you wash in the proper liquid, and a type A molecule promptly sticks to the gripper… once the tip has positioned a molecule, it reacts quickly, about a million times faster than unwanted reactions at other sites’. Thus, such a modified AFM tip would enable the mutual positioning of the reactive groups, forging a chemical reaction at the desired location but nowhere else. Drexler has calculated that, because it would accelerate desired reactions by a factor of a million or so, a molecular manipulator could perform up to 100,000 steps with good reliability.

However, solution-phase synthesis is massively parallel, because a chemical reaction typically makes many trillions of molecules at once. In contrast, the AFM-based manipulator may be able to construct a large molecular aggregate, but it would do so one molecule at a time. Therefore, manipulator-made products would be trillions of times more expensive. Also, the procedure would tie up a very expensive scientific instrument for hours in order to build that one large molecule. If you wanted to construct another scanning probe microscope, it would take on the order of a million million million years for one to manipulate its own mass in molecules.
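
That timescale follows from simple assumptions (a sketch; the instrument mass, molecule size and placement rate below are my own rough assumptions, not figures from the text):

    # Why one-molecule-at-a-time assembly cannot self-replicate in practice.
    instrument_mass_kg = 1.0      # assumed mass of the scanning-probe system
    molecule_mass_kg = 1.66e-25   # ~100 daltons, a typical small molecule
    placements_per_sec = 1.0      # assumed rate: one placement per second

    molecules = instrument_mass_kg / molecule_mass_kg
    years = molecules / placements_per_sec / (365 * 24 * 3600)
    print(f"{molecules:.1e} molecules, ~{years:.1e} years")
    # ~1.9e17 years: within an order of magnitude of the 'million
    # million million years' quoted above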

Needless to say, AFM-based molecular manipulation won’t be put to such purposes. Instead, it would provide chemists with useful information concerning the building blocks and assembled structures of the components needed to begin the multi-stage development of nanosystems. In order to begin stage one, we would need to develop more than 400 self-assembling building blocks, each assembled from 50 monomers (a monomer is a molecule that is able to bond in long chains). These building blocks would self-assemble into folded polymers (long strings of linked molecules) with 10-100 parts. Brownian assembly of medium-scale building blocks of folded polymers would lay down the technical foundations required to attempt stage two in Drexler’s pathway towards nanosystems. This stage would require arrays of complex molecules that would serve as feedstocks, suspended in pure liquid rather than the solution of stage one’s working medium. The structural material built from this feedstock would be cross-linked polymers; a solution-based system built from such blocks would take an estimated hour to copy itself and several days to produce a macroscopic quantity of systems. Drexler explained that this would establish a technology base that would allow ‘advance… towards inert interiors, then towards more active reagents’. (A reagent is a chemical structure, such as a molecule, that undergoes change as a result of a chemical reaction.)

These advances would allow us to develop the technologies required for stage three: systems flexible enough to assemble a variety of different molecular building blocks. Structural material would still be cross-linked polymers, but now systems are put together via positional assembly, rather than the folding-block assembly of stage two. Mechanochemical reactions would be capable of generating subassemblies from a smaller range of feedstock molecules. The technical know-how facilitated by stage three would allow progress towards devices made from diamondoid materials. The system would be able to increase the frequency of its operation, because the working environment would be suitable for more active reagents. Instructions and control, which hitherto had come from outside via acoustic pressure waves, would be replaced with internal control and data storage devices that could activate complex subroutines from brief instructions. These advances would give us the capability to build multiple production lines, and various other developments that would enable feedstocks to be simpler, less pure, and therefore less expensive. From there, we would have the technical capability to attempt stage four: desktop nanosystems whose assemblers would be made up of a billion molecules. Cross-linked polymers would have been replaced by diamondoid solids, assembled in a vacuum environment with each atom bonded in the exact place planned by advanced computational modelling.

Let’s recap this backwards-chaining analysis in Drexler’s own words. ‘A series of steps can enable a relatively smooth transition from solution-phase assembly of monomers… to the assembly of diamondoid mechanisms in an inert (eventually, vacuum) environment using highly reactive reagents’. Simplifying further, developing productive nanosystems would require the ability to build complex macromolecular structures in a solution environment, and eventually the mechanosynthesis of macroscopic structures in a vacuum environment. But now we have fallen into the trap of downplaying the challenge of developing molecular nanotechnology again. Saying that you need to build macromolecular objects and macroscopic structures in order to develop nanosystems is akin to saying you need to combine suitably shaped prims and write scripts in order to build the content of Second Life.

Notice that, while builders are concerned with combining prims and writing scripts, putting it like that does not convey a sense of the enormous practical challenges met by SL’s creative community. It brushes over the long, hard work demanded by the host of sub-problems that were generated during every step along the path towards building the content of this online world. I would argue that building and scripting the content of SL amounted to researcher-centuries of effort. And you can be quite sure that the pathway towards nanosystems will also be plagued by problems and sub-problems, demanding interdisciplinary collaboration across the fundamental sciences, amounting to researcher-centuries of effort.

However, it’s worth remembering that the successful realisation of stage four molecular manufacturing would give us digital control over matter, and therefore building the next nanofactory would be a whole lot easier. Why? Well, recall that in SL, getting an idea from concept to finished build takes some effort, but once completed you can easily duplicate it. A well-designed molecular manufacturing system would essentially treat atoms as bits, and a practical design for its molecular mills, assemblers and all other components requires them to be made from materials they can handle. So what? Because, as Damien Broderick explained, ‘it might cost a zillion dollars and exhaust the mental reserves of an entire generation…but once it’s there in its vacuum tank, once its specs are in the can… it will make its twin, and they’ll make another two, so you have four, then eight, then sixteen…and by god at the end of the day you will look into your garden, at your handiwork, and you will see that it is good’.

Coming in Part 2, I look at claims that molecular manufacturing will make everything for free, that it will cause a new caste system to emerge, and what kind of economy SL/Nanotech really works under.
