Snowcrashing Into The Diamond Age: An Essay By Extropia DaSilva

PART ONE: SL AND THE GRAY GOO PROBLEM.

When Second Life launched in June 2003, it attracted a citizenry not unlike the Internet’s Usenet communities of the late ‘80s and early ‘90s. Toward the end of 2006, a software tool known as Copybot went on sale, and for a brief while our metaverse echoed the web of the late ‘90s and its Napster controversy: peer-to-peer and open source versus IP theft.

2003 to 2006: three years into which were condensed events that defined more than a decade of the web’s growth. I wrote in a previous essay (‘The Metaverse Reloaded’) that ‘the pace of change is quickening’, and you might take this as further proof. But I want to talk about something else the Copybot controversy highlights: namely, the fact that history does not so much repeat itself as rhyme.

Who were the first people to be affected by a Copybot-style threat to their livelihood? Well, I can assure you that it was not Metallica. In fact, to find the first Copybot one must go back in time to the 18th century. At that time, in Nottingham, England, the equivalent of SL’s content designers were the weavers who hand-crafted fine stockings, lace and other quality fabrics. Just as the SL sellers’ guild feared that a widespread ability to freely copy content would threaten their income, so the weavers’ Copybot arrived with the invention of the power loom and other textile automation machines. An abrupt change in economic power occurred, slipping from the hands of the weaving families to the owners of the machines.

The shop owners of SL formed groups to protest against the selling and use of Copybot. By 1812, the weavers had formed a secret society. They made threats and demands of factory owners, many of whom complied. But this is where the two stories diverge. When the SL protests reached the ears of the Powers That Be (the Lindens), they eventually changed the terms of service to make using Copybot to replicate IP an offence. As the weavers’ increasingly guerrilla tactics escalated into bloody battles, their actions also attracted the attention of the Powers That Be (the Tory government). But in this case the outcome was not in the weavers’ favour: their group dissolved with the imprisonment and hanging of prominent members, and machines continued to displace workers as the Industrial Revolution steamrollered on.

We all know now that the arrival of the machines opened up new markets and more lucrative employment for people who could design, manufacture and market them. No doubt the widespread proliferation of Copybot would also have opened up opportunities for people enterprising enough to see beyond merely ripping off the hard work of others. But then, the reality of lost jobs is often more compelling than some indirect promise of new opportunities in new markets.

I feel it is safe to assume that the clash of enterprises that began with weavers versus machinery, and was most recently acted out as SL sellers’ guild versus Copybot, will rear up again in the future. This is most apparent when you consider the implications of nanotechnology and artificial intelligence. So far in my essays, I have considered their potential to surpass the limitations imposed by our biology. But nano and AI have implications for civilisation as a whole. The former may herald a new industrial revolution: desktop ‘factories’ capable of building any physically possible product out of molecular fragments or even atoms. And a famous dystopian scenario for the rise of truly intelligent robots is the loss of all jobs and a slide into an Eloi-like decadent state for the human race.

While we probably don’t have strong AI existing in SL as of yet, our metaverse does have a kind of molecular manufacturing. Everything you see in SL was created through the ‘atomistic construction’ of assembling and manipulating prims. One could argue, then, that SL is not only a realisation of Stephenson’s ‘Snow Crash’ but also a virtual representation of his other novel, ‘The Diamond Age’, in which a society deals with the arrival of widespread matter compilers. Our metaverse has seen various attacks from self-replicating objects that were named ‘gray goo’ after Eric Drexler’s scenario of endlessly-replicating nanobots. Analogies exist, then, but how useful are they? To what extent does our metaverse prepare us for a future of molecular manufacturing and intelligent robot workforces?

Imagine how difficult it would be to build a machine if every screw could not be made to fit every corresponding nut. The 19th century’s industrial revolution happened, in part, because engineering progressed towards microtechnology. This enabled the crafting of precise, uniform parts, which is essential for mass manufacture. The 21st century sees us progressing towards nanotechnology, the prefix ‘nano’ denoting a billionth: a nanometre is a billionth of a metre. This is almost too small to imagine, approaching the scale of atoms, and yet many appliances already have some components built to this scale. In that sense, the nanotech era is here. There are, however, two kinds of methodology. One kind of nanotech involves reducing big things to sizes so small their behaviour changes. Grind a Coke can into nano-sized dust particles and you have a fuel-air explosive. Or, hitting a batch of pure carbon with a special laser causes the atoms to rearrange themselves into new molecular forms of carbon called buckyballs and nanotubes. The latter have the electrical conductivity of silicon and the heat conductivity of diamond. Efforts are underway to use them for computing switches and circuitry far surpassing today’s best.

The other methodology comes literally from the opposite direction, because the ultimate goal is to build things atom by atom, using armies of machines the size of molecules. Futurists refer to this method as ‘molecular manufacturing’, and the practical realisation of its ultimate goal is known as an assembler. If we should succeed in mass-producing assemblers, the result would be a second industrial revolution orders of magnitude more powerful than its predecessor. Imagine going into a virtual store in the metaverse and purchasing something, a laptop computer perhaps, or a diamond and sapphire necklace. Now imagine that, rather than waiting for it to arrive in the post, instructions are sent via the Internet to a box that sits on your desk. Inside the device, tiny machines follow instructions that tell them to grab appropriate atoms from feedstocks and use them to assemble the thing you ordered.

So, who is working on these assemblers? That is the wrong question to ask, because they are actually the inevitable end result of a trend towards miniaturization that pervades all technology. Microtechnologists strive to make smaller products; materials scientists strive to make more useful solids; chemists endeavour to synthesize ever more complex molecules; and manufacturing as a whole strives to make better products. In short, it is all about developing methods of rearranging atoms at increasingly fine-grained levels. This is in no way an explicit goal. Chemists are not seeking ways to synthesize complex molecules because they want to build assemblers; they are motivated by other concerns. The same applies to the other groups, but nonetheless continued success in their respective fields will take us, step by step, towards nanoscale mechanical systems capable of building complex structures with atomic precision.

Another reason why it is not an explicit goal is that, right now, we lack the technological capability to build one. It would be like the Wright brothers trying to build a supersonic jet fighter. Explicit research is instead focused on providing proofs-of-principle of the core components of an assembler. The theoretical work is largely thanks to K. Eric Drexler, whose book ‘Nanosystems’ brings together the conceptual and analytical tools required to understand molecular machines and manufacturing. A decade on from its publication, every aspect of its conceptual designs has been validated through additional design proposals, supercomputer simulations and the construction of actual molecular machines and machine parts. Again, I must emphasize that these experimental machines are probably further from a practical assembler than the Wright brothers’ first airplane was from a modern jet fighter. But they do provide proof of principle. No serious objection to the feasibility of molecular manufacturing stands up to scrutiny.

Critics who say molecular manufacturing is impossible are like those who denied the possibility of flying machines. But another kind of critic who ought to be listened to worries that it is potentially exceedingly dangerous. On this issue there is universal agreement that danger exists, but much disagreement about what to do about it. At one end of the spectrum, one finds the opinion that its potential dangers are so great that nanotechnology should be abandoned as a research project. Its polar opposite throws up its hands, declares the arrival of assemblers inevitable, and points out that a blanket ban would require totalitarian regimes out of Orwell’s darkest visions. In between these extremes are various levels of relinquishment.

Of all the dangers implied by molecular manufacturing, the most well-known is ‘gray goo’. Assemblers build things atom by atom, but not literally one at a time. If they did, even building something the size of this full stop ‘.’ would take millions of years. The idea is to use trillions upon trillions of nanomachines, bringing a similar number of atoms to their desired locations simultaneously. But how do you manufacture that vast army of nanomachines in the first place? With self-replication. Imagine a nanobot floating in a bucket containing a soup of molecules. It grabs carbon atoms and builds its twin. The two make four… eight… and the numbers continue growing exponentially, stopping only when resources run out or a command to cease replicating is given. ‘Gray goo’ refers to runaway self-replication, where a malicious person commands the nanobots to replicate endlessly, or some error results in the command to cease going unheeded. Now, our hypothetical nanobot is built out of carbon atoms, a reasonable choice given that carbon’s ability to form four bonds makes it ideal for molecular assemblies. Life itself is built largely out of carbon for the exact same reason, but that means all life could potentially be a convenient source of carbon atoms for these endlessly multiplying nanobots…

Building one nanobot would require roughly a million atoms, and the Earth’s biomass contains roughly 10^45 atoms. So 10^39 nanobots would be enough to entirely replace the biomass, a number reached by the 130th doubling of the population. Assuming each replication takes on the order of 100 seconds, all life on Earth could be consumed by a technological plague roughly three and a half hours after the command to go forth and multiply was given!
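
For readers who want to check the arithmetic, here is a minimal Python sketch. The figures are the ones above; the 100-second doubling time is the stated assumption, not a measured value.

    import math

    ATOMS_PER_NANOBOT = 10 ** 6        # roughly a million atoms per nanobot
    BIOMASS_ATOMS = 10 ** 45           # rough atom count of Earth's biomass
    SECONDS_PER_DOUBLING = 100         # assumed replication time (see above)

    # How many nanobots would it take to replace the entire biomass?
    total_bots = BIOMASS_ATOMS // ATOMS_PER_NANOBOT          # 10^39

    # Starting from a single bot, the population doubles each generation,
    # so log2(10^39) doublings are needed to get there.
    generations = math.ceil(math.log2(total_bots))           # 130

    hours = generations * SECONDS_PER_DOUBLING / 3600
    print(generations, round(hours, 1))                      # 130, 3.6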

This is, for now, only a hypothetical threat. We have some time to learn how to prevent it. I have sometimes wondered if Second Life could help in some way, particularly in light of the fact that Linden Lab is already having to deal with self-replicating prims ‘consuming’ precious computing resources. I raised this possibility at a Thinkers discussion, where Nite Zelmanov commented, ‘there may be some lessons in terms of how to approach the problem, where to attack the replication, but I think “real gray goo” is going to require mostly chemists and nano biologists to stop, not network and database techs’. Land Box’s comment was, ‘curious too if we should or even could have some kind of centralised control over replication ability. I can’t think of a real world analog of the force of a Linden stepping in when things get out of hand.’

That last point is worth emphasising: the real world and a virtual country are not exactly the same, and so we must be careful not to stretch comparisons too far. I have sometimes wondered whether an SL resident could unleash a gray goo attack accidentally, or whether it would have to be intentional. I am not a scripter, so I did not know the answer, but Land Box observed that ‘people who make replicating prims to build an object can mess up conditions meant to limit the count because they accidentally reset the parent counter on rez’. So it looks like an accidental outbreak in SL is not beyond the realms of possibility.

But what about an RL outbreak? It is commonly assumed that such an event is a definite possibility but, in actual fact, from where we CURRENTLY stand an accidental gray goo catastrophe is impossible. The key to understanding why is to appreciate what is necessary to arrive at the first step: creating the first nanobot. For this ’bot to have the ability to self-replicate, it must have five capabilities integrated into one small package. It would have to be mobile, able to travel throughout the environment. It would need a shell to protect it from ultraviolet light. It would need to contain within itself a complete set of blueprints, and have the means to interpret them. It must be capable of metabolism, breaking down random chemicals into simple feedstocks. Lastly, it must have the ability to turn that feedstock into nanosystems.

Are all five capabilities necessary for molecular manufacturing? Certainly not. There is NO reason for nanobots to be mobile; NO reason for them to use natural resources as raw materials; and NO reason for them to contain within themselves the complete instruction set that guides their replication. The second fact provides us with a sure way of preventing an accidental outbreak. My hypothetical nanobot was built out of carbon atoms. That was a deliberate simplification. In actual fact, molecular manufacturing (both biological and technological) requires hydrogen, nitrogen, oxygen, fluorine, silicon, phosphorus, sulfur and chlorine as well. But UNLIKE biological systems, nanomechanical systems can be designed to require chemicals not commonly found in nature. In a factory setting, where adequate supplies of all needed parts can be provided, rigid and inflexible systems incapable of manufacturing everything they need for replication from naturally-occurring compounds would be FAR more economical than nanobots with all the replicating capabilities of bacteria. We cannot discount the possibility that such a device will be built, just as we cannot be absolutely sure that nobody will ever build a house with a nuclear bomb in the basement, wired to a button with the legend DO NOT PRESS! in bold red letters below it. But we can be sure that such a dangerous set-up could not happen accidentally.
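
To make the logic of that argument concrete, here is a trivial Python sketch. The capability names follow the five requirements listed earlier, and the two example designs are my own illustrative inventions: runaway replication needs the conjunction of ALL five capabilities, so a factory-style assembler lacking even one of them cannot go rogue by accident.

    # The five capabilities a runaway replicator must integrate (see above).
    RUNAWAY_REQUIREMENTS = {
        "mobility", "uv_shielding", "onboard_blueprints",
        "metabolise_natural_feedstock", "build_nanosystems",
    }

    def can_run_away(capabilities: set) -> bool:
        """Runaway replication needs ALL five; lacking any one stops it."""
        return RUNAWAY_REQUIREMENTS <= capabilities

    # A hypothetical factory assembler: immobile, fed synthetic chemicals,
    # with its instructions streamed in from outside. Safe by construction.
    factory_assembler = {"build_nanosystems", "uv_shielding"}
    print(can_run_away(factory_assembler))     # False

    # Only a deliberately engineered, bacterium-like design ticks every box.
    deliberate_goo = set(RUNAWAY_REQUIREMENTS)
    print(can_run_away(deliberate_goo))        # True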

If we return to SL, though, we find little comfort in the notion that gray goo bots cannot exist without idiots willing to build them deliberately. The Lindens are constantly dealing with griefers and their self-replicating prim attacks, and the Web as a whole is a breeding ground for malicious worms and bots. Again, we must emphasise the contrasts between RL and SL. Griefers are a mischievous lot but are not necessarily out to cause real, physical harm. The vast majority of cyber criminals would not design viruses that could actually kill anyone. But it is true that cyber criminals produce worms of increasing complexity, and even now these could be potentially lethal. After all, our modern societies depend heavily on computers for mission-critical tasks, from operating our call centres to flying and landing airplanes, from handling our financial transactions to guiding intelligent weapons. Malicious code could be more than a mere nuisance if it infected systems like that. Molecular manufacturing is the means by which we remove the boundaries separating RL and VR, because it will give us the means to feed signals coming from cyberspace direct to our brains, and because it will enable us to bring the morphing and replicating quality of prims to our RL environments. Given that virus writers already compete to outdo their peers, how can we be sure that nobody will seek the kudos of constructing the ultimate technological plague?

This dilemma is further compounded by the fact that if a terrible weapon can be built, and if acquiring such a weapon guarantees a decisive victory, no country wants the enemy to get it first. Consider, also, that past weapons of mass destruction such as nuclear bombs required the financial and industrial might of governments in order to be built. It is simply not possible for terrorists to gain access to such devices unless some government supplies them. They are also hideously indiscriminate, killing anybody in their blast radius regardless of any labels we might have invented to separate ‘us’ from ‘them’. But molecular manufacturing ushers in a new era of ‘knowledge-enabled destruction’, which has two meanings. First, it refers to the fact that gray goo could ultimately be built without requiring large facilities or rare raw materials. In most utopian scenarios, desktop assemblers capable of building anything are as widespread as desktop computers. So we would have a device capable of building mechanical viruses sitting alongside a device that for decades has amassed a vast pool of knowledge about building software (and even biological) viruses. What a nice combination for terrorists the world over. Secondly, knowledge-enabled destruction refers to the fact that gray goo can be programmed to have various levels of potency. It could be designed to infest a particular region while leaving neighbouring countries unaffected. It could be designed as a plague that attacks certain races. It could be designed to brainwash people by re-wiring their brains at the molecular level, or used to torture them by causing their pain receptors to fire. It could act as a surveillance tool, invisibly monitoring groups or individuals. And never forget that the person or persons programming it to do all this and more can apply conditions that ensure THEY are not affected.

Yet another worrying fact is that it would be extremely hard to monitor a country for signs of clandestine nano-weapons manufacturing. Spy satellites can pinpoint illegal nuclear weapons facilities (particularly if such weapons are tested), but they cannot peer into garages and bedrooms the world over. A world with molecular manufacturing could be very paranoid indeed if it is effectively blind to illicit weapons programs, and the only way to provide effective surveillance is through, yes, nanobots.

So while I feel accidental goo plagues are impossible, deliberate misuse of nanotechnology is a distinct likelihood. Calls for widespread relinquishment are self-defeating, as a consideration of the history of software viruses will show. When Fred Cohen programmed the first such virus at the University of Southern California in 1983, his approved study was intended to show that such things could exist, thereby ensuring that the real goal of designing defences would receive backing. The plan did not work, because the demonstrations alarmed authorities, who immediately called for a ban on any work connected to viruses. One American official even declared that the State Department would not have allowed Cohen to SPEAK about viruses had it been known his talk would focus on such things. It was not until 1987 that his paper detailing the feasibility of viruses was accepted for publication in the journal ‘Computers and Security’.

By then, of course, others had independently discovered the virus. Playful and malicious programmers launched a de facto a-life research effort that began with the likes of ‘Festering Hate’ on the Apple II, and which would eventually lead to today’s MyDoom and airborne viruses spreading via cellular phones. And yet, despite their increasing complexity, bots and worms remain largely a mere nuisance. They have not caused the utter collapse of computing systems, and this is obviously because an ‘immune system’ evolved in response and is largely effective. This success took place in an industry with no regulation and minimal certification for practitioners.

Not surprisingly, some advocate a similarly regulation-free environment in which to develop molecular manufacturing. This stance usually comes with the observation that ‘a terrorist does not need to wait for his inventions to pass the regulatory process’: only the people working on defences are slowed down by regulatory bodies, while in black markets the world over their antagonists surge ahead. It is commonly agreed that molecular manufacturing can be made inherently safe if a ‘prime directive’ stating that ‘replication should require materials not found in the natural environment’ is universally adopted. It is also agreed that a ban should be placed on self-replicating machines that contain within themselves the code that guides their replication, such codes coming instead from a trusted central server. These codes should be both encrypted and time-limited. A certain level of self-replication is necessary for molecular manufacturing to be useful, so we must strive to find a balance between useful and harmful levels.
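
As a thought experiment, those time-limited codes from a trusted central server might work something like the following Python sketch. Everything in it (the shared key, the five-minute lifetime, the token format) is my own invention for illustration; it merely shows how a device could refuse to replicate unless it holds a fresh, unforgeable authorisation.

    import hmac, hashlib, time

    SECRET_KEY = b"held-only-by-the-trusted-server"   # hypothetical signing key
    TOKEN_LIFETIME = 300                              # seconds a code stays valid

    def issue_replication_code(device_id: str):
        """Trusted server signs (device, expiry) so codes cannot be forged."""
        expires = time.time() + TOKEN_LIFETIME
        message = f"{device_id}:{expires}".encode()
        signature = hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()
        return signature, expires

    def may_replicate(device_id: str, signature: str, expires: float) -> bool:
        """Device-side check: the code must be authentic AND unexpired."""
        if time.time() > expires:
            return False                              # the time limit bites here
        message = f"{device_id}:{expires}".encode()
        expected = hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()
        return hmac.compare_digest(signature, expected)

    sig, exp = issue_replication_code("assembler-0001")
    print(may_replicate("assembler-0001", sig, exp))  # True, until it expires

In a real design the verifying device could not be trusted with the signing key, or it could mint its own codes; public-key signatures would fix that, but the shape of the check is the same.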

This idea of time-limited replication sounds like Andrew Linden’s ‘grey goo fence’ to me. Apparently, ‘the way it works is it counts the rate of llRezObject(), llGiveInventory() and llGiveInventoryList() for a particular family of objects (owner_id and asset_id are used to key the counts) and any objects that exceed the threshold of 240 events per 6 seconds with a 12 second limit will fail to rez’. In plain English, I think this means a prim can only replicate a certain number of times at a certain rate. Pixeleen Mistral called the grey goo fence ‘a sort of compromise between unfettered prim rezzing, and a pay-as-you-go restriction on prim creation’. A drawback is that there is always somebody who sits right on the boundary between fair and unfair use. The best the Lindens can hope for is that the residents falling into the category of ‘legitimate users adversely affected by the fence’ are kept to a bare minimum.
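
My reading of that description, expressed as a Python sketch. To be clear, this is a guess at the logic, not Linden code, and the class and method names are mine; only the numeric thresholds come from Andrew Linden’s figures. Rez events are counted per (owner_id, asset_id) family over a sliding window, and a family that exceeds the threshold is refused for a while.

    import time
    from collections import defaultdict, deque

    THRESHOLD = 240      # events allowed per window (Andrew Linden's figure)
    WINDOW = 6.0         # seconds over which events are counted
    BLOCK_TIME = 12.0    # seconds a family is refused after tripping the fence

    class GooFence:
        def __init__(self):
            self.events = defaultdict(deque)  # (owner_id, asset_id) -> timestamps
            self.blocked_until = {}

        def allow_rez(self, owner_id: str, asset_id: str) -> bool:
            key, now = (owner_id, asset_id), time.time()
            if self.blocked_until.get(key, 0) > now:
                return False                   # still inside the block period
            q = self.events[key]
            q.append(now)
            while q and q[0] < now - WINDOW:   # drop events older than the window
                q.popleft()
            if len(q) > THRESHOLD:             # too many rezzes, too fast
                self.blocked_until[key] = now + BLOCK_TIME
                return False
            return True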

I do not consider it entirely silly to suppose that the lessons the Lindens and the SL community as a whole learn from distinguishing gray goo from merely vigorous objects will serve us when our technological expertise makes molecular assemblers practical. Of course, the numbers involved in capping replicating prims cannot be directly applied to nanotechnology. I do not know what sort of numbers a prim has to reach before it becomes a nuisance: hundreds of thousands? Millions? Two hundred? Whatever it is, if we then say ‘right, nanotechnology must be similarly limited’ it would be quite useless, because such figures are far too small when dealing with atoms and molecular fragments. We need to think in numbers of trillions and above.

Another method the Lindens are working on is ‘technical options which will allow only trusted residents to fully utilise LSL across the grid’. It is planned that ‘trusted residents will be clearly defined, and there will be processes in place (not all payment regulated) to become “trusted” if your account falls outside of that designation’. The trick here will be for LL to distinguish between people who work to be among the chosen few because they genuinely wish to contribute, and those who seek acceptance but will abuse that position just as soon as it is granted. Although clearly not intentional, every new defence the Lindens integrate into SL poses a challenge: ‘are anybody’s coding skills “kung fu” enough to defeat mine?’ You can be sure that somebody, somewhere, will have both the patience needed to pass the certification process and the mindset needed to be a griefer.

Again, it will be a matter of balance. The Lindens cannot make the verification process so straightforward that it presents no obstacle to any griefer with a modicum of patience. On the other hand, it cannot be so harsh that legitimate builders and scripters are put off. After all, it is the founding principle of SL that a metaverse is only practical if its users are given the power to collaboratively create the content within it. I would imagine that what Linden Lab actually means by ‘trusted’ residents is ‘residents willing to forgo a certain amount of privacy and be accountable for future actions’. Daring Petricher said as much on the official blog: ‘to be “trusted” means you have the required information on file that allows punishment for whenever offences are committed’. Note that this comment does not assume the certification process eliminates griefing and gray goo, only that it will lead to swift accountability after an attack happens. It is true to say that Linden Lab has two concerns when it comes to gray goo: minimising attacks on the grid, and clearing up the damage caused by anything that slips past defences. To this end, there exists a blacklist: rules describing content the Lindens want deleted. Plans are also underway to ‘add intelligence to (the gray goo fence) in an effort to make it smarter about distinguishing real gray goo’. This sounds to me like work is underway to develop an autonomous immune system.

Both limiting the level of molecular manufacturing available to the populace at large, and designing immune systems to deal with gray goo, have been suggested as prudent steps to take in the face of the real thing. It is proposed that assemblers should come in two classes: ‘experimental devices’ and ‘approved products’. The latter would be the fruits of limited assemblers, but neither the products nor the assemblers themselves would possess the ability to self-replicate. Once they had passed regulation, limited assemblers could churn out approved products inexpensively and abundantly, but they will NOT have the ability to make anything in the world. THAT ability will belong only to experimental devices. These are the equivalent of the P4 containment facilities in which biologists handle lethal biohazardous material in environments totally sealed from the outside world.

The question of who gets access to the experimental devices depends upon whether one thinks practically or ethically. Practically, anybody could be given sealed environments within which they could experiment to their heart’s content. On the nanoscale, one cubic micron is a large space, enough for millions of components, and a few microns amount to a large laboratory. On the micron scale, a centimetre is an enormous distance: a micron-scale device surrounded by walls a centimetre thick would be like a person surrounded by walls kilometres thick. And a mere spark of static electricity is enough to incinerate a micron-scale device in an instant.
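
The scale comparison is easy to verify. A minimal Python sketch, in which the 1.8-metre ‘person’ is simply my stand-in for a human:

    device_size = 1e-6       # a one-micron device, in metres
    wall_thickness = 1e-2    # a one-centimetre containment wall, in metres
    ratio = wall_thickness / device_size      # walls 10,000x the device's size

    person_height = 1.8                       # metres (assumed)
    equivalent_wall_km = person_height * ratio / 1000
    print(ratio, equivalent_wall_km)          # 10000.0, 18.0 km of wall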

Given all that, it would be perfectly possible to devise a containment facility no larger than a sugar cube, packed with layer upon layer of fail-safe devices that ensure anything inside it cannot get out. Also, the device need not necessarily contain real nanomachines. It could be a computer that simulates the laws of physics at the molecular level. It has been calculated, after all, that one cubic inch of nanotube circuitry would be 100 MILLION times more powerful than a human brain (which, at an estimated 20 petaflops, is not exactly lacking in raw power). We are already making progress toward modelling nanoscale systems in software. The company Nanorex has produced a software tool called NanoEngineer that allows users to quickly and easily design molecular machine systems of up to perhaps 100,000 atoms in size, then perform various computational simulations on the system, such as energy minimization. Robert Freitas described it as ‘a CAD system for molecules, with a special competence in the area of diamondoid structures… users creating designs for relatively complex nanomachine components’. Limited to 100,000 atoms, the software can only simulate parts such as bearings, gears and joints. To attempt even basic simulation of a complete nanobot would require molecular mechanics simulation of 10-40 billion atoms, which is just barely possible with today’s supercomputers. Of course, TODAY’s supercomputers are perhaps 1/20th as powerful as the human brain. Those future nanotube computers will easily have the grunt to perform fine-grained simulations of entire nanomachines.
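
To give a flavour of what ‘energy minimization’ means in a tool like NanoEngineer, here is a toy Python sketch. It relaxes a tiny cluster of atoms under a Lennard-Jones potential, a crude stand-in for the molecular mechanics force fields such software really uses, and it handles four atoms rather than 100,000; it is purely illustrative.

    import numpy as np
    from scipy.optimize import minimize

    def lj_energy(flat_coords):
        """Total Lennard-Jones energy of a cluster (epsilon = sigma = 1)."""
        pos = flat_coords.reshape(-1, 3)
        energy = 0.0
        for i in range(len(pos)):
            for j in range(i + 1, len(pos)):
                r = np.linalg.norm(pos[i] - pos[j])
                energy += 4.0 * (r ** -12 - r ** -6)
        return energy

    # Four atoms scattered at random; minimization relaxes them towards
    # their lowest-energy arrangement, a regular tetrahedron.
    rng = np.random.default_rng(0)
    start = rng.uniform(0.8, 1.5, size=12)    # 4 atoms x 3 coordinates
    result = minimize(lj_energy, start, method="L-BFGS-B")
    print(result.fun)    # should approach -6.0, the known 4-atom minimum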

In ‘Escaping the Gilded Cage’, Linden Lab’s Cory Ondrejka commented, ‘the computational power required to fully simulate a motorcycle down to the chemical energy in its internal combustion engine is currently beyond server hardware… while computing a real-time simulation of complex mechanical or chemical processes are years away, each doubling of computer performance brings atomistic creation closer… atomistic creation allows the system to smoothly expand what it simulates’. Now, I dare say Gwyneth Llewelyn is rolling her eyes and chuckling at my naivety but… if we could combine huge computing resources, molecular mechanics simulations and collaborative atomistic construction, would the result be a Second Life with fully integrated realtime CAD design for virtual nanomachines and micromachines, as well as motorbikes and the rest? Moreover, if the simulation were accurate enough, wouldn’t we have confidence that successful designs within SL would work for real?

At this point, I want to turn to the ethical question. Earlier, I quoted Nite Zelmanov as saying ‘real gray goo is going to require mostly chemists and nano biologists to stop, not network and database techs’. Where we CURRENTLY stand along the road to molecular manufacturing, Zelmanov is right to emphasise chemistry. If you want to be a nanotechnologist, you have to have a grounding in chemistry. But nanotechnology will ultimately arise in the theoretical space where the fundamental sciences overlap: not just chemistry, but biology, the life sciences, materials science, mechanical engineering, physics, electrical engineering, computer science and information technology. Every advance in these areas is progress toward molecular manufacturing. At a certain level of maturity, information becomes the same thing as a physical object.

I happen to own a copy of ‘Nanosystems’ and, frankly my dears, its equations place it beyond my comprehension. But that software tool NanoEngineer and its future successors may very well provide interactive simulations that allow me to visualise and intuitively understand those equations. Experimental devices may not let anything physical escape, but they DO allow information to get out. Knowledge, perhaps, on how to successfully create a nanobot that incorporates all five requirements for runaway self-replication. Remember, at a certain level of sophistication, nanotechnology makes information the same thing as a physical object. Recalling ‘knowledge-enabled destruction’, Bill Joy commented ‘if you require p4 containment then keep information about it under the same kind of wraps’.

When a self-replicating prim attack infests SL, the Lindens can close access to the grid and sort the mess out. It is inconvenient but hardly catastrophic. An outbreak of real gray goo, though, may very well be precisely that. No wonder, then, that some people fear the consequences of unfettered access to experimental devices. On the one hand, it would ensure that the astonishingly creative and innovative community behind open-source software would turn its hand to devising defences. On the other hand, as designs become available for more and more nanodevices, the chances of someone figuring out how to combine them to make a dangerous replicator increase. The fact is that any active immune system capable of effectively combating gray goo would require, you guessed it, gray-goo-enabling technology. As is the case with LL and access to LSL, the goal must be to draw the boundary loosely enough to cause little difficulty for legitimate work, yet tightly enough to make dangerous activities very difficult indeed.

One comfort we find when contemplating the nightmare scenarios of Joy et al is that they are not entirely realistic. They portray a gray goo attack as if it were unleashed on today’s unprepared world. In reality, the developmental progress that takes us from where we are today to full-scale molecular manufacturing may very well enable us to work out the best methodology for minimizing risk and maximizing potential. Although the path to molecular nanotechnology is not absolutely clear, we are making good progress in outlining such a roadmap. Freitas wrote: ‘First, theoretical scaling studies must be used to assess basic concept feasibility. These initial studies would then be followed by more detailed computational simulations of specific nanorobot components and assemblies, and ultimately full systems simulations, all integrated with additional simulations of massively parallel manufacturing processes from start to finish, consistent with a design-for-assembly engineering philosophy. Once molecular manufacturing capabilities become available, experimental efforts may progress from component fabrication and testing, to component assembly, and finally prototype and mass-manufacture.’

As we move from the drawing board, to computer simulations, to laboratory demonstrations of mechanosynthesis, to component design and fabrication, to parts assembly and integration, and finally to device performance and safety testing, it is my hope that the metaverse will grow in sophistication and provide effective methods for safely designing mature nanotechnology. As we have seen, we can use SL right now. We currently stand at the point where we have strong theoretical studies of nanomachines and are just beginning to acquire software tools for simulating nanomachine components. We can use SL as a communications medium to hold discussion groups debating the societal impact of molecular manufacturing, and no doubt use the building, scripting and machinima options that SL allows to make these discussions more lively than the purely text-based debates of message boards. And these need not be uninformed opinions, because the Internet has excellent resource material for every aspect of nanotechnology. It behoves us all to understand the implications of this powerful technology.

If you happen to be the sort of person who prefers to deal with real problems in today’s world, SL gives you the opportunity to get involved too. Whenever a Linden Town Hall meeting turns to topics like minimizing self-replicating prim attacks, or how to filter out griefers in verification processes, your contributions not only work towards making SL more enjoyable, they also work towards practical solutions for combating irresponsible uses of molecular manufacturing.

The next phase hinges on whether or not our fledgling metaverse will ever allow simulations of molecular machines. Ondrejka wrote: ‘Commodity servers can currently simulate around 10,000 objects that range in scale from a centimetre to tens of meters, with many objects engaging in behaviour and physical interaction at any time. The real world operates at a much smaller scale, from 100 times smaller for mechanical systems to 100,000 times smaller for chemical and biological processes’. It is revealing that he does not discount the possibility of realtime simulations of complex mechanical, biological and chemical processes, only noting that they are ‘years away (with) every doubling of computer performance mov(ing) atomistic creation closer’.

Remember, the roadmap expects full simulations of machines to be possible before the stages of component design and fabrication are reached, and there are many stages after component design to pass through before full nanomachines are feasible. This may give us an opportunity to experiment with simulations of nanobots without worrying that they will be physically manufactured. Based on current laboratory demonstrations of silicon photonics, we can predict that future household Internet connections will run as high as 10 gigabits per second, some 10,000 times faster than today’s broadband. And detailed designs are now available for computers a billion times faster than today’s PCs. A future SL built on that kind of technology may very well be able to handle realtime simulations of self-replicating nanotechnology, and SL residents could have fun designing simulated battles between gray and blue goo (‘blue goo’ being the name given to nanotechnology designed to combat runaway replication). By the time technology has reached the point where full nanomachines can be built, we may very well have complete blueprints for powerful nanodefences.
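
Even a crude population model hints at what such a simulated battle might look like. In this Python sketch every rate is invented purely for illustration: gray goo doubles each timestep, while blue goo replicates a little more slowly but neutralises a fixed amount of gray goo per defender. With these numbers the outbreak grows for a few steps before the defence overtakes it.

    GRAY_GROWTH = 2.0       # gray goo doubles each step (assumed)
    BLUE_GROWTH = 1.5       # blue goo replicates more slowly (assumed)
    BLUE_KILL_RATE = 0.6    # gray goo neutralised per blue unit per step (assumed)

    gray, blue = 1.0, 1.0   # one rogue replicator versus one defender
    for step in range(20):
        gray = gray * GRAY_GROWTH - blue * BLUE_KILL_RATE
        blue = blue * BLUE_GROWTH
        if gray <= 0:
            print(f"outbreak contained at step {step}")   # step 6, as it happens
            break
    else:
        print("gray goo wins: the defence was seeded too thinly")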

With all this talk of apocalyptic catastrophe, the reader would be forgiven for wondering why we would want to pursue nanotechnology at all. One reason is that nanotechnology will allow us to blur the boundaries between RL and SL, and so create a metaverse beyond the imagination of Neal Stephenson. For one thing, nanotechnologists like Robert Freitas have drawn up conceptual plans for in vivo fibre networks that can be threaded through our capillary system and interface directly with the brain at the neuron level, for realtime brain-state monitoring and full-immersion virtual reality. We are also beginning to see laboratory proofs of principle that demonstrate the feasibility of mind/machine symbiosis. I have written about all this before, so I shall concentrate here on another way to blur the boundaries: using nanotechnology to bring the morphing qualities of computer graphics to RL. This will come about because of a field known as ‘dynamic physical rendering’. Intel are pioneering this field and describe its goal as ‘physical, moving, three-dimensional replicas of objects or people, so lifelike that human senses accept them as real.’

DPR is a branch of technology that takes advantage of the research driving the computer industry, which is all about learning to design, power, program and control densely-packed sets of microprocessors. The idea is to build a ‘catom’ (a ‘claytronics atom’): a sphere that is ideally one millimetre in diameter. Each catom would have to incorporate four capabilities. Computation: we can already fit reasonable amounts of computation into the space available on the surface of a 1-2mm sphere. Motion: a catom will move itself around by energizing a particular magnet and co-operating with a neighbouring catom; if one catom is held rigid by links to its neighbours, the other will swing around the first, rolling across the fixed catom’s surface and into a new position. Power: we must develop a way of powering catoms without bulky batteries and wired connections, perhaps by connecting just a few catoms to resistor networks and using routing algorithms to distribute power throughout the ensemble. Finally, communication: a catom must be able to work with an ensemble of millions or billions, each catom capable of as many as six axes of interconnection. This will require routing techniques that focus on the location and function of catoms at a given point in time, and the building of communication highways within ensembles to limit the complexity of the routing problem.
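
The motion primitive is easy to picture in code. In this 2D Python sketch (my own simplification: real catoms would move in three dimensions under electromagnetic actuation) a mobile catom ‘rolls’ around a fixed neighbour by rotating its centre about the fixed catom’s centre, with the two surfaces staying in contact throughout.

    import math

    def pivot(mobile, fixed, angle_degrees):
        """Roll one catom around another: rotate the mobile catom's centre
        about the fixed catom's centre, keeping the two spheres touching."""
        angle = math.radians(angle_degrees)
        dx, dy = mobile[0] - fixed[0], mobile[1] - fixed[1]
        return (fixed[0] + dx * math.cos(angle) - dy * math.sin(angle),
                fixed[1] + dx * math.sin(angle) + dy * math.cos(angle))

    # Two 1mm catoms side by side; the mobile one swings 90 degrees around
    # its rigidly-held neighbour and into the adjacent lattice position.
    fixed_catom = (0.0, 0.0)
    mobile_catom = (1.0, 0.0)    # centres 1mm apart: the spheres are touching
    print(pivot(mobile_catom, fixed_catom, 90))    # ~(0.0, 1.0)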

So, a 1-2mm sphere with all four capabilities is a ‘catom’. Bring millions of these catoms together and the ensemble gives rise to a rather wonderful-sounding capability dubbed ‘claytronics’ or ‘programmable matter’. This stuff would be capable of forming physical replicas that mimic the shape and appearance of a person or object, modelled in realtime using more advanced versions of the 3D image capture used in films like ‘King Kong’. According to Intel, claytronics will let you ‘reshape or resize a model car or home with your hands, as if you were working with modelling clay‘. It would effectively enable us to work with physical CAD models, and of course you could collaborate with colleagues distributed all over the world, just as you do now in SL. As a person at one location manipulated the model, it would be modified at every location. The same meeting environment, with people and objects, could appear at each location, in real form or as replicas. A movement or interaction at any location would be reproduced at all of them. It may even be possible to reproduce how something feels and responds to touch (how ‘squashy’ it is, for example). Claytronics may allow a hug between SL friends to become indistinguishable from RL bodily contact.
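
At heart, the synchronisation Intel describes is the same state-replication problem the metaverse already solves, and a minimal Python sketch shows the idea. The class and the site names are my own illustration: every edit made at one location is applied to every replica, so each ensemble re-forms to match.

    class SharedModel:
        """Each physical location holds a replica; edits propagate to all."""
        def __init__(self):
            self.replicas = {}             # site name -> local model state

        def join(self, site: str):
            self.replicas[site] = {}

        def edit(self, site: str, part: str, shape: str):
            # An edit made at any one site is applied to every replica,
            # so each location's claytronics ensemble re-forms to match.
            for state in self.replicas.values():
                state[part] = shape

    meeting = SharedModel()
    for site in ("london", "tokyo"):
        meeting.join(site)
    meeting.edit("london", "car_roof", "convertible")
    print(meeting.replicas["tokyo"])       # {'car_roof': 'convertible'}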

At a millimetre in diameter, a catom is too big to qualify as a nanodevice, though it will no doubt take advantage of advances in nanoscale assembly. The main challenge, of course, is to arrive at the stage where billions or even trillions of catoms can be purchased at a price affordable by the average person. Right now, this seems impossible. Bear in mind, though, that in 1969 the idea that Intel would one day sell millions of transistors for pennies would have seemed ludicrous. Moore’s Law, which currently allows 800 million transistors to be packed onto Intel’s next-generation chips, has a habit of making unaffordable technology cheap enough to give away.

Nanotechnology allowing us to combine the morphing, shape-shifting qualities of VR with the physical qualities of RL objects? When you consider the imagination behind some of SL’s builds, you begin to see that assemblers and their anticipated ability to churn out endless copies of any and every material desire are one of the LEAST interesting possibilities of molecular manufacturing. But while they may not be particularly imaginative, they would almost certainly be economically significant. All our current economic models are based on the allocation of scarce resources, but the assembler would herald an ‘economics of abundance’. They would also, let’s face it, be as potentially threatening to existing markets as Copybot was feared to be by the SL sellers’ guild. So won’t the Powers That Be block their development?

This is what we shall investigate in Part 2.