CTRL-ALT-R: Rebake Your Reality (Part 2): An essay by Extropia DaSilva.

Part 2 of this essay published here with the permission of Extropia DaSilva. — Gwyn

Extropia DaSilva pictured by Miulew Takahe

“My God, it’s full of stars!”

– Arthur C. Clarke.

INTRODUCTION: THE PROBLEM WITH SCIENCE FICTION.

Do these quotes remind you of anything?

‘There were seven girls waiting there, all floating just off the tatami. Except the one sitting by herself, at the end of the imaginary table, was a robot’.

‘The people are pieces of software called avatars. They are the audiovisual bodies that people use to communicate with each other in the metaverse’.

They might sound like descriptions of Second Life® but they were actually written roughly a decade before Linden Lab® launched its online world. The first quote comes from William Gibson’s ‘Idoru’ (1996) and the second from Neal Stephenson’s ‘Snow Crash’ (1992). At any given moment, millions of people are engaged in activities that bring to life the predictions of writers like Gibson and Stephenson, experiencing life vicariously through digital avatars and building entire worlds within the abstract space of virtual reality. This is hardly the first time that works of science fiction have successfully predicted the future. Writers in the 1950s penned fantastic tales about explorers in space, and in the 19th century Jules Verne anticipated submarine warfare.

Of course, one has to be careful when proclaiming the likes of Stephenson and Gibson to be prophets, foreseeing the likes of SL and World of Warcraft years before their time. Certainly, those blockbuster MMORPGs and online worlds came after ‘Idoru’ and ‘Snow Crash’ were published, but they were hardly the first of their kind. Using the word ‘avatar’ to describe people’s virtual selves was not an invention of Neal Stephenson but of Chip Morningstar, one of the creators of a VR environment called ‘Habitat’ that existed in 1985 (and, actually, the term originated in Hindu mythology: an ‘avatar’ is a temporary body a god inhabits while visiting Earth). If you include text-based virtual communities along with graphics-based environments, the first MUD was built by Roy Trubshaw on Essex University’s DEC-10 mainframe in 1979.

We find something similar when we consider the science fiction of the 1950s. While it certainly pre-empted the Apollo mission and even Sputnik, rocket technology like Wernher von Braun’s V-2 missiles already existed. In 1953, the US Air Force had charted the curves and metacurves of speed, and those charts forecast that payloads could be placed in orbit within four years. The space age did indeed begin in 1957 with the successful deployment of the first satellite (although Sputnik was of course Russian, not American). On the other hand, people’s intuition about when the space age would dawn was rather less reliable. In the foreword to ‘Satellite’, written by E. Bergaust and W. Beller a few months before Sputnik was in orbit, the authors wrote, ‘public acceptance of Man’s coming exploration of space is slow. It is considered an event our children may experience, but certainly not one that we shall see’.

It will be interesting to see which turns out to be more accurate: Ray Kurzweil’s charts for the growth of information technology (by 2049, $1,000 will buy enough raw computing power to match the raw computing power of every human brain on the planet, robots will have artificial intelligence at least equal to human intelligence, and humans will be migrating into cyberspace by uploading their minds into functionally equivalent software), or most people’s reaction to such speculation (‘by 2049? Oh, come on, Ray’).

It could be argued that the eventual arrival of SL was a self-fulfilling prophecy. People like Stephenson and Gibson took the crude MUDs that existed in their day and used reasonable extrapolations of the growth of information technology to imagine what such worlds might be like in the future. Once technology had actually begun to approach the levels of sophistication needed to realise these visions (at least to a limited extent), it was only a matter of time before some sci-fi-loving software whizzkid with ‘more money than God’ (to use Prokofy Neva’s estimate of Rosedale’s personal fortune) was going to organize a team and make it so.

Mind you, thinkers were playing around with the idea of ‘jacking in’ to hedonistic fantasy worlds before the first MUD was built. In 1974, Robert Nozick wrote a book called ‘Anarchy, State, and Utopia’, in which he introduced the ‘Experience Machine’, a hypothetical device that could create totally convincing simulations of anything a person might desire. Admittedly, VR as it exists today is still a far cry from delivering immersion to that extent, but we do see some of Nozick’s arguments being played out in SL. Nozick reckoned that people would not choose the machine over real life, because they care about being in touch with reality more than they care about the pursuit of pleasure. Now, on one hand SL would seem to run contrary to that supposition. Tens of thousands of people per day do forsake their real life for the treats of VR. It is not too hard to imagine many more folk being enticed into SL if it ever did equal the immersion promised by the ‘Experience Machine’. But, on the other hand, SL residents generally get a bit uppity if they are told their second life is less real than their first. In particular, any person daring to suggest it is a game (a not entirely unreasonable point, given that it is clearly built using videogame technology) is liable to receive impassioned responses that ‘no, this is not a game, I run a serious business’ and blah, blah, blah. People do seem overly eager to convince others (and, I suspect, themselves) that you can be in SL while also being very much connected to Reality.

It might have been too much of a stretch to use Nozick’s ‘Experience Machine’ as a science fiction device that predicted the arrival of online worlds, because Nozick was not a science fiction writer and ‘Anarchy, State, and Utopia’ was a work of political philosophy – a seemingly different genre that is perhaps taken more seriously than sci-fi. Not only is science fiction deemed rather silly by some people, the very term is often used to pour scorn on other people’s ideas. For instance, the philosopher John Searle said of Kurzweil’s book, ‘The Age of Spiritual Machines’, ‘it is important to emphasize that all of this is seriously intended. Kurzweil does not think he is writing a work of science fiction. He is making serious claims that he thinks are based on solid scientific results’. The implication is that if Kurzweil HAD presented his vision in the form of a science-fiction story, he would not have wanted anybody to take it seriously.

The refrain ‘this sounds like science fiction’ might be used as a means to inform people that speculations have wandered into absurdity, but what it really does is expose a flaw in Western language when it comes to thinking about the future. According to Eric Drexler, ‘the Japanese language seems to lack a disparaging word for “futurelike”. Ideas for future technologies may be termed “Shorai-Teki” (an expected development), “Mirai no” (a hope, or a goal) or “Uso no” (imaginary only)’. I am not entirely sure, but I think a technology is ‘Shorai-Teki’ if there are working prototypes in R&D labs and all that is required is some further refinements, or maybe more favourable market conditions, before the product is commercially viable. An example of ‘Shorai-Teki’ would be glasses that allow a person to see VR objects as if they exist in physical space. And anything that is deemed to be ‘Uso no’ runs contrary to the known laws of physics. For instance, Damon Knight’s story, ‘A for Anything’ featured ‘the Gizmo’, a rather neat gadget that could duplicate any object within its field, requiring no feedstock supply to do so. Since matter and energy cannot be created or destroyed (only transformed), a device that can make something out of nothing is clearly violating the laws of physics. 

That just leaves technologies that come under the category ‘Mirai no’. I find this phrase to be a bit more ambiguous than the other two. A ‘goal’ could refer to technologies whose basic feasibility has been outlined but for which not every technical hurdle on the way to a working prototype has been overcome. Desktop nanofactories, for instance, require advanced diamondoid mechanosynthesis, which is yet to be demonstrated. But it could equally refer to technologies much more speculative than that, such as warp drives or time machines, which would require capabilities so far beyond ours that we cannot seriously begin to think about building them, only wonder whether it will ever be possible to do so.

I think that highlights another problem with science fiction. It would seem reasonable to say that a technology either exists or it does not. Nuclear power, spacecraft and robotic assembly lines exist and are therefore ‘science fact’. But space elevators, cold fusion and artificial general intelligence do not exist, and so they belong firmly in the world of ‘science fiction’. But then, once upon a time, any technology you care to mention did not physically exist and so was ‘science fiction’. So it cannot be the case that Reality is simply divided up into technology that exists and technology that does not. Rather, it is a place where technologies range from those that CANNOT exist, through those bordering on the impossible, those just barely possible if we assume fantastic leaps in capability, and those that are long-term, medium-term or short-term prospects, to those already working in R&D labs and in people’s homes and workplaces.

Furthermore, I would argue that this revision is still not truly representative of reality, because there is actually a smooth continuum from what is practical to what is feasible to what is unfeasible and finally to the downright impossible. Are paper clips impossible? To our ancestors, before metallurgy was discovered, yes. Is it possible to build an artificial brain that can house a consciousness? With our current understanding of the mechanisms of consciousness, no. The point I am making is that while it is easy to demonstrate that something is possible by actually making it, it is rather more difficult to use theoretical arguments to show either that something absolutely cannot be built (since there may be some hitherto unknown loophole in the laws of physics through which technology ‘X’ can sneak from ‘Uso no’ to ‘Mirai no’) or that it definitely can be. For instance, I suspect that artificial brains can indeed be made to work at least as well as human brains do, but I am prepared to accept there may be some reason why this is actually an impossible dream.

‘Science fiction’: what a curious linking of polar opposites. After all, the former word is often equated with the pursuit of Truth. Others like Morgaine Dinova would say it is the pursuit of falsification that defines ‘science’, but either way we like to think of the scientist as a person for whom facts trump personal belief. ‘Fiction’, on the other hand, is concerned with the wilful construction of lies and fantasy. So what does that make science fiction? A true lie? The philosopher Alan Watts made an interesting point when he said, ‘to say that opposites are polar is to say much more than that they are far apart. It is to say they are extended and joined – that they are the terms, ends, or extremities of a single whole’. The artist M.C. Escher argued along similar lines when he wrote, ‘if one quantity cannot be compared to another, then no quantity exists. There is no “black” on its own or “white” either…we only assign a value to them by comparing them to one another’.

Thus, the purpose of science fiction is often not to project into our future, but to allow us to re-examine life as it is now by means of a well-worn philosophical tool. In philosophy one often encounters ‘thought experiments’, in which we are asked to consider what it would mean if the parameters of the world were changed thus and so. The practicality of the experiment is not the issue; what matters is that the world is changed in some way, giving us a contrast – a “black” within a world of “white” – with which we may more clearly see something we were previously unaware of. We can re-examine our lives. The blockbuster science fiction movie ‘The Matrix’ can be read as a tale about humanity versus technology, but on another level those robots and Agents represented rigid thinking, and the Matrix was representative of the institutionalised control that people generally allow themselves to be subjected to. And when the idea that people could be used as batteries to power Machine City was pooh-poohed as technically impossible, the point was missed that this was an allegorical image, alluding to the fact that when we become passive consumers, we give up our lifeforce to run society in many ways.

The basic question of science fiction is ‘what if?’. OK, well, what if matter and energy were somehow less fundamental than information? In the third installment of our journey through speculative science, we’ll talk about the idea that the laws of physics derive from computation, and how RL is able to go offline while seeming to run continuously. And in the fourth and final installment, the question asked is: ‘What if we could gain absolute control over our world?’ I would imagine that residents of SL would feel they had absolute control over their online world if they had full access to the simulator software. Right, so what if you could gain access to the laws of physics; could change the very parameters of reality? Such a thought experiment was explored by Stanislaw Lem in a fictional story called ‘The New Cosmogony’. Lem’s curious conclusion is that such extraordinary control does not equate to absolute freedom. Instead, he assumes that the universe is full of civilizations with that level of control, and that the Minimax rules of Game Theory conspire to restrict their ability to change reality.

Lem might have considered his to be a fictional tale, but others like Hans Moravec argue that such scenarios belong in the realm of science fact. Moravec is one of the world’s leading roboticists, described by Rodney Brooks (director of the AI lab at MIT) as ‘brilliant, innovative’. On the other hand, Brooks also described Moravec as ‘a true eccentric’ who, apparently, is ‘nuts’. Anyone who has read Moravec’s book ‘Robot: Mere Machine to Transcendent Mind’ will understand Brooks’s mixed feelings towards the roboticist. A work that epitomises Arthur C. Clarke’s Second Law, ‘the only way of discovering what’s possible is to venture a little way into the impossible’, ‘Robot’ begins with the birth of the AI movement in 1950 and then charts a progress of technology through every stage from ‘feasible’ to ‘unfeasible’ and then boldly on to claims that sound so outrageous only a genius or madman would dare speak them out loud. What we will find at the far edges of possibility, so Moravec thinks, is the very Game Lem jokingly suggested built the universe we know…

But before that, we are going to discuss reality as information processing…

THREE: REAL LIFE’S ROLLING UPDATE.

Occasionally, existence in Second Life is temporarily halted. I remember attending a party that was put on to promote a Scope Cleaver building project, and about an hour after I arrived a message from the gods appeared, informing us that the region we were in was due to be restarted and that if we did not TP to another region within 2 minutes we would be booted out of SL altogether. Other times one attempts to log in, only to find the whole world is offline and will stay that way until vital maintenance is completed, or one discovers the current version of SL has been superseded by a new iteration that must be downloaded and installed before you can connect to the grid.

Now, you can say all kinds of negative things about RL. The TP function never works, you cannot rez prims out of thin air, and you are assigned your avatar via a random genetic lottery and thereafter have minimal ability to alter its basic shape (and doing so requires a lot more effort than adjusting sliders). But you also have to give credit where credit is due. Slowdown and lag are not apparent, the draw distance is impressive, and it never seems to require an update. What is more, RL never crashes and never needs to be taken offline in order to perform vital maintenance. It has run continuously for at least 13.7 billion years. How does it achieve such impressive stability?

The answer seems pretty obvious. RL is not a simulation but an objective reality that exists in and of itself. But at a fundamental level, what actually puts the ‘real’ into ‘Real Life’? Let’s not worry about that for a moment and concentrate instead on explaining SL’s reality on a fundamental level. Perhaps a Genesis account for Linden Lab’s world would go something like this:

‘In the beginning there were the prims. And residents rezzed onto the barren land and saw that the emptiness was not good. And they commanded, “let the prims replicate!” and lo! The prims did multiply in number. And the residents took those prims and they resized them and they reshaped them and they combined them. And it became apparent unto the residents that the prims were now unrecognisable from their original form and so they gave the new forms new names, saying “We name you ‘house’, we name you ‘shoe’….’.

It’s a nice story but not one that is entirely accurate. Did the Prim come first or was it the Texture? Maybe it was the Script? Come to think of it, weren’t the land, sea and sky already in place before a single resident touched down and started building? Really, though, it is pointless arguing over which of the above came first, because they are all mere manifestations of a more fundamental reality. All land, all sea, all sky, every single avatar and everything seen or heard in SL are ultimately patterns of binary digits. ‘In the beginning there was Zero and One’. But is it actually the case that SL’s existence can ultimately be defined in terms of lines of code? Those 0s and 1s are just representations of physical processes.

Primarily, SL’s existence arises out of the electronic transformation of bits, and the job of registering and transforming them is performed by logic circuits. Bits can be registered by any device that has two reliably distinguishable states. In contemporary computers we use capacitors: a capacitor at zero voltage registers a 0 and a capacitor at non-zero voltage registers a 1. Hard drives use tiny magnets to register bits: a magnet whose north pole points down registers a 0 and a magnet whose north pole points up registers a 1. As well as bits, logic circuits need wires to move the bits from one location to another and logic gates to transform the bits one or two at a time. The latter are implemented using transistors, which can be thought of as little switches. Transistors are wired together to create four kinds of gate: ‘AND’, ‘OR’, ‘NOT’ and ‘COPY’. The names refer to the particular way each kind of gate takes one or more input bits and transforms them into one or more output bits. AND, OR, NOT and COPY gates can in turn be wired together to make logic circuits, which are devices capable of performing more complicated transformations of input bits. Fundamentally, SL owes its existence to patterns of bits that are themselves created out of patterns formed by tiny switches, each one of which can be ‘on’ or ‘off’ at any given moment.
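To make this concrete, here is a minimal sketch in Python (purely illustrative; it is obviously not how Linden Lab’s servers are written) of the four gates named above being wired into a slightly bigger circuit, a one-bit half adder that adds two bits and produces a sum bit and a carry bit:

    # Toy models of the four gates described above. Each takes bits (0 or 1)
    # and returns bits; in real hardware these are transistor circuits.
    def AND(a, b): return a & b
    def OR(a, b):  return a | b
    def NOT(a):    return 1 - a
    def COPY(a):   return a, a

    def half_adder(a, b):
        # Wire the basic gates into a circuit that adds two one-bit numbers.
        a1, a2 = COPY(a)
        b1, b2 = COPY(b)
        # XOR built from AND, OR and NOT gives the 'sum' bit...
        total = OR(AND(a1, NOT(b1)), AND(NOT(a1), b1))
        carry = AND(a2, b2)     # ...and a single AND gives the 'carry' bit
        return total, carry

    for a in (0, 1):
        for b in (0, 1):
            print(a, '+', b, '->', half_adder(a, b))

Chain enough of these little circuits together and you get adders, multipliers and, eventually, the processors on which SL’s grid runs.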

But now all we have done is shift the problem of explaining SL’s fundamental reality. Those computers did not just pre-exist. How did they come about? Modern computers began as a theoretical concept envisaged by Alan Turing. Turing imagined a deceptively simple device that he called the ‘Turing Machine’. Such a machine is capable of executing seven basic commands: ‘Read tape, move tape left, move tape right, write 0 on the tape, write 1 on the tape, jump to another command, halt’. What makes this machine deceptively simple is the fact that being able to execute those seven commands gives it a most remarkable property. A Turing Machine is a device that is flexible enough to read and correctly interpret a set of data that describe its own structure. Moreover, a single Turing Machine can mimic the operation of any other machine, no matter how complex. All it requires is the specification of any machine as a table of behaviour. Tracing the operation of that machine becomes a mechanical matter of looking up entries in the table.
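For readers who like to see the nuts and bolts, here is a minimal Turing machine sketch in Python. The tape alphabet and the table of behaviour are my own toy example (the machine simply flips every bit it reads and halts at the end of the tape); it is not Turing’s original notation, but it shows how ‘looking up entries in the table’ is all the mechanism there is:

    # A minimal Turing machine: a tape, a read/write head, a state, and a
    # table of behaviour. Each entry maps (state, symbol read) to
    # (symbol to write, direction to move, next state).
    def run_turing_machine(table, tape, state='start'):
        tape = list(tape)
        head = 0
        while state != 'halt':
            if head == len(tape):            # grow the tape with blanks as needed
                tape.append('_')
            symbol = tape[head]
            write, move, state = table[(state, symbol)]
            tape[head] = write
            head += 1 if move == 'right' else -1
        return ''.join(tape)

    # A toy table of behaviour: flip every bit, halt on the blank at the end.
    flip_bits = {
        ('start', '0'): ('1', 'right', 'start'),
        ('start', '1'): ('0', 'right', 'start'),
        ('start', '_'): ('_', 'right', 'halt'),
    }

    print(run_turing_machine(flip_bits, '1011_'))   # prints 0100_

Swap in a different table of behaviour and the very same run_turing_machine function becomes a different machine; that interchangeability is the universality described above.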

Prior to Alan Turing’s conceptual design, machines were dedicated devices that each performed one task only. Today, though, we are quite familiar with a class of machine that has exactly the flexibility he imagined. Just think of all the ways we use our PCs. We can compose letters and write books using word processor packages. We can listen to music or watch videos on them. We can manipulate photographs. We can drive simulated cars and fold virtual proteins. Machines are no longer single-purpose devices, and in many ways they are no longer just physical devices, either. For while your computer is ‘being’ a typewriter or a Formula 1 car or whatever, that particular ‘machine’ coexists with other machines that inhabit our hardware as dormant patterns of information; as software encoded on the electromagnetic ether that invisibly surrounds us. Fundamentally, everything computers can possibly ‘be’ originates from that conceptual machine that Alan Turing described in four pages of dense notation.

But just as the digital information from which SL arises was a manifestation of a physical process, so too is the information of ideas. Human thought somehow arises from  patterns of brain cell activity. Brains are built by molecular machines following instructions encoded in the genome. The English language has 26 letters; the genome uses just 4: G, A, T, and C. Those letters are something physical. DNA is a molecule containing 4 different chemical bases: (G)uanine, (A)denine, (T)hymine and (C)ytosine. Chemicals are made out of atoms, which in turn are made out of protons, neutrons and electrons that operate according to physical laws that seem to be uncannily well described by mathematics.

At the absolute and ultimate basement level, then, knowing what SL fundamentally is and doing likewise for RL are not two separate issues. The same ultimate reality explains both. Such a revelation will come as no surprise to augmentists, given that they believe SL is not to be separated from RL. And of course one can make a pretty irrefutable case, arguing that RL and SL all belong in the same universe. Having said that, it is pretty hard to find an augmentist (or an immersionist, for that matter) ready to accept that their individuality is really a self-similar pattern infinitely repeating and forking throughout the multiverse. Perhaps the practice of living separate lives is a more accurate reflection of the multiplicity of self that emerges when one considers reality from a wider perspective than merely that which we see within the boundary of our light horizon.

In our attempt to define SL on a fundamental level, we accounted for SL’s reality in terms of patterns of zeros and ones, in terms of neural and genetic information, and in terms of quantum behaviour. We also tried to define it in terms of more ‘physical’ objects: a pattern of prims, a pattern of transistor activity, a pattern of brain activity, a pattern of particles. All of these are just matter/energy arranged in different ways, while the earlier list refers to different patterns of information. We can therefore reduce our list of things that RL and SL are ultimately made up of to matter/energy and information.

But how do they relate to each other and which is more fundamental? One possibility is that patterns of matter/energy are all that objective reality consists of, leaving information as a subjective tool our minds use for making sense of the world. It’s hard to take the notion of information having no objective reality seriously. If that were true, it would imply that DNA really carried no information until natural selection evolved brains complex enough to conceptualize the molecule as storing genetic data. Also, whenever historians talk about the likes of Newton, we are informed that he ‘discovered’ the law of gravity. Again, the implication is that such a law existed independently of a subjective need to impose order on the Universe.

Perhaps, then, information is more fundamental, and what matter/energy can be and how it can behave somehow arises from information processing? This possibility has conceptual difficulties, because we are used to information being embodied in a physical thing. Software exists in computers, genetic information in DNA, and our memories are stored in the brain. So when we consider the possibility that reality arises from information processing, we naturally ask what kind of computer is performing the calculations. It could be that information is on an equal footing with matter/energy, and that reality can no more exist without both being present than fire can exist without fuel, heat and oxygen. Personally, this possibility is the one that makes most sense to me, but we’re going to explore the idea that information really is more fundamental than matter/energy.

Belief in the world as a virtual reality goes back a long way. Buddhism sees the physical world as an illusion, the allegorical tale of Plato’s cave suggested the reality we know is a mere shadow hinting at, rather than being, the world as it truly is, and Pythagoras thought Number was the essence from which the physical world is created. More recently, the likes of Ed Fredkin and Stephen Wolfram have argued that reality might arise from something like a simple computer program known as a cellular automaton (CA), invented by John von Neumann in the early 50s. You don’t necessarily need a computer to run a CA; a piece of paper will suffice. The most basic CA consists of a long line of squares, or ‘cells’, drawn across the page, each of which can be either black or white. This first line represents one half of the initial conditions from which a CA ‘universe’ will evolve. A second line of cells is then drawn immediately above the first, and whether a cell in this line is black or white depends on a rule applied to its nearest neighbours in the first line. The rules make up the other half of the initial conditions.

What I have just described is an example of an algorithm, which is a fixed procedure for taking one body of information and turning it into another. In the case of a CA, the pattern of black and white cells on the first line represents the ‘input’ and the ‘output’ it produces is the pattern of cells in the next line. That in itself is not terribly exciting, but much more interesting things can happen if we run the CA as a recursive algorithm, one where the output is fed back in as input, which produces another output in the form of a third row of black and white cells, which then becomes the input for a fourth line and so on, in principle, for evermore.

Whether or not something interesting happens depends upon the rules used to govern the behaviour of each line of cells. In the case of the simplest possible CAs, those that consist of a one-dimensional line of cells, two possible colors and rules based only on the two immediately adjacent cells, there are 256 possible rules. All cells use the same rule to determine future behaviour by reference to the past behaviour of their neighbours, and all cells obey that rule simultaneously. Some initial conditions produce CAs of little interest. The result might be a repetitive pattern like the one associated with a chessboard, or completely random patterns. What makes these uninteresting is the fact that you can accurately predict what you will get if you carry on running the program (more of the same). These kinds of CAs are known as ‘class 1’. ‘Class 2’ CAs produce arbitrarily spaced streaks that remain stable as the program is run. Class 3 CAs produce recognizable features (geometric shapes, for example) appearing at random. Some CAs produce patterns known as ‘gliders’, shapes that appear to move along a trajectory (what is actually happening is that the pattern is continually destroyed and rebuilt in an adjacent location). Using a computer to run a CA makes it a lot easier to watch gliders, because then the recursive algorithm is calculated millions of times faster and the illusion of movement is totally persuasive.
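An elementary CA of exactly this kind takes only a few lines of Python to run. The sketch below is illustrative rather than canonical; the rule number uses Wolfram’s standard numbering of the 256 possibilities, and rule 110 (which turns up again below) is used as the example:

    # One-dimensional, two-colour cellular automaton with nearest-neighbour rules.
    # Bit n of the rule number says what a cell becomes when its (left, centre,
    # right) neighbourhood spells out the number n in binary.
    def step(row, rule):
        new_row = []
        for i in range(len(row)):
            left   = row[i - 1]                  # indices wrap around at the edges
            centre = row[i]
            right  = row[(i + 1) % len(row)]
            pattern = (left << 2) | (centre << 1) | right
            new_row.append((rule >> pattern) & 1)
        return new_row

    # Start from a single black cell and apply the rule recursively, line by line.
    row = [0] * 31 + [1] + [0] * 31
    for _ in range(20):
        print(''.join('#' if cell else '.' for cell in row))
        row = step(row, 110)     # rule 110: one of the 256, discussed below

Change 110 to 250 and you get a dull, predictable chessboard-like pattern; change it to 30 and you get apparent randomness; 110 gives the intricate structures that make class 4 (discussed next) so interesting.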

The most interesting CAs of all are class 4. These produce patterns of enormous complexity, novelty and surprise. What is most intriguing about them is the fact that their initial conditions seem no more complex than those which go on to produce the dull class 1 types of CA. To people like Wolfram, this is evidence that our attitudes towards complexity are not a true reflection of how reality works. ‘Whenever a phenomenon is encountered that seems complex it is taken almost for granted that it is the result of some underlying mechanism that is itself complex. A simple program that can produce great complexity makes it clear that this is in fact not correct’. We also see this phenomenon in fractals. Consider the famous ‘Mandelbrot Set’. How much storage space would be required to save a copy of every pattern it contains? The answer is: more storage space than you would have even if you used every particle in the visible universe to store a bit. That is because the Mandelbrot Set contains an infinite number of patterns, and so it would exceed any finite storage capacity. And yet, underlying all that complexity there is a simple formula (z → z² + c) that can be described in a few lines of code.
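The ‘few lines of code’ really are few. Here is a rough Python sketch of the standard membership test (the familiar colourful pictures come from colouring each point by how quickly it escapes; the set itself is just the points that never do):

    # The Mandelbrot iteration: start at z = 0 and repeatedly apply z -> z*z + c.
    # If |z| ever exceeds 2, the point c has escaped and is not in the set;
    # points that survive many iterations are treated as (probable) members.
    def in_mandelbrot(c, max_iterations=100):
        z = 0j
        for _ in range(max_iterations):
            z = z * z + c
            if abs(z) > 2:
                return False
        return True

    print(in_mandelbrot(-1 + 0j))    # True: -1 sits inside the set
    print(in_mandelbrot(1 + 1j))     # False: this point escapes almost at once

Everything in the infinitely detailed image lives in those dozen lines, plus the choice of which points c to test.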

Faced with evidence that something complex could have a simple cause, Wolfram asked ‘when nature creates a rose or a galaxy or a human brain, is it merely applying simple rules – over and over again?’ Cellular automata and fractals produce interesting patterns, but there can be more to their complexity than that. In 1985, Wolfram conjectured that a CA following ‘rule 110’ might be a universal Turing machine, and therefore capable of carrying out any imaginable computation and simulating any machine. This conjecture was verified in 2000 by Matthew Cook. Then, in 2007, Alex Smith proved that an even simpler device (a two-state, three-colour Turing machine known as the 2,3 machine) was also capable of universal computation.

Wolfram commented, ‘from our everyday experience with computers, this seems pretty surprising. After all, we’re used to computers whose CPUs have been carefully engineered, with millions of gates. It seems bizarre that we should achieve universal computation with a machine as simple as the 2,3 machine’. The lesson seems to be that computation is a simple and ubiquitous phenomenon, and that it is possible to build up any level of complexity from a foundation of the simplest possible manipulations of information. Given that simple programs (defined as those that can be implemented in a computer language using just a few lines of code) have been proven to be universal computers, and have been shown to exhibit properties such as thermodynamic behaviour and biological growth, you can begin to see why it might make sense to think of information as more fundamental than matter/energy.

Working independently of Wolfram, Ed Fredkin believes that the fabric of reality, the very stuff of which matter/energy is made, emerges from the information produced by a 3D CA whose logic units confine their activity to being ‘on’ or ‘off’ at each point in time. ‘I don’t believe that there are objects like electrons and photons and things which are themselves and nothing else. What I believe is that there’s an information process, and the bits, when they’re in certain configurations, behave like the thing we call the electron, or whatever’. The phenomenon of ‘gliders’ demonstrates the ability of a CA to organize itself into localized structures that appear to move through space. If, fundamentally, something like a CA is computing the fabric of reality, particles like electrons may simply be stubbornly persistent tangles of connections. Fredkin calls this the theory of ‘digital physics’, the core principle of which is the belief that the Universe ultimately consists of bits governed by a programming rule. The complexity we see around us results from recursive algorithms tirelessly taking information they have transformed and transforming it further. ‘What I’m saying is that at the most basic level of complexity an information process runs what we think of as the law of physics’.
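Fredkin’s own models are three-dimensional and not spelled out here, but the flavour of ‘particles as persistent patterns’ can be seen in a much more familiar two-dimensional CA, Conway’s Game of Life. The Python sketch below (my own illustration, not Fredkin’s rules) steps the classic five-cell glider, which drifts one cell diagonally every four generations even though no individual cell ever moves:

    # Conway's Game of Life on a small wrapping grid, used to step a glider.
    # Rules: a live cell survives with 2 or 3 live neighbours; a dead cell
    # becomes live with exactly 3 live neighbours; everything else stays dead.
    SIZE = 10

    def step(live_cells):
        counts = {}
        for (x, y) in live_cells:
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    if dx or dy:
                        neighbour = ((x + dx) % SIZE, (y + dy) % SIZE)
                        counts[neighbour] = counts.get(neighbour, 0) + 1
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in live_cells)}

    def show(live_cells):
        for y in range(SIZE):
            print(''.join('#' if (x, y) in live_cells else '.' for x in range(SIZE)))
        print()

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for generation in range(8):
        show(glider)
        glider = step(glider)

Watch the output and the ‘electron’ analogy becomes vivid: the glider looks like a little object travelling across the grid, yet all that ever happens is that individual cells switch on and off according to the rule.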

Earlier, I mentioned that we are so used to information being stored and processed on a physical system that when we encounter the hypothesis that matter/energy is made of information, our natural inclination is to ask what the information is made of. Fredkin insists that asking such a question demonstrates a misunderstanding of the very point of the digital physics philosophy, which is that the structure of the world depends upon pattern rather than substrate; on a certain CONFIGURATION, rather than a certain KIND, of bits. Furthermore, it is worth remembering that (according to digital physics) EVERYTHING depends entirely on the programming rules and initial input, including the ability of bodies of information as complex as people to formulate bodies of information as complex as metaphysical hypotheses. According to Fredkin, this makes it all but impossible for us to figure out what kind of computer we owe our existence to. The problem is further compounded by the proven fact that a CA can be a universal computer. Reporting on Fredkin’s philosophy, Robert Wright commented, ‘any universal computer can simulate another universal computer, and the simulated can, because it is universal, do the same thing. So it’s possible to conceive of a theoretically endless series of computers contained, like Russian dolls, in larger versions of themselves and yet oblivious to those containers’.

Because it adopts the position that our very thought processes are just one of the things to emerge from the calculations performed by the CA running the Universe, digital physics has ready explanations for the apparent contradiction between reality ‘as is’ (assuming digital physics is correct) and reality as it is perceived. When a CA is run on a computer or piece of paper, ‘space’ pre-exists. But Wolfram believes the Universe-generating program would be unnecessarily complex if space was built into it. Instead, he supposes the CA running our universe is so pared down that space is NOT fundamental, but rather just one more thing to emerge consequent to the program running. Space, as perceived by us, is an illusion created by the smooth transition of phenomena through a network of ‘nodes’, or discrete points that become connected as the CA runs. According to Wolfram, not only the matter we are aware of but also the space we live in can be created with a constantly updated network of nodes.

This implies that space is not really continuous. Why, then, does it seem that way to us? Part of the reason is that the nodes are so very tiny. Computers can build up photorealistic scenes from millions of tiny pixels and smooth shades from finely mottled textures. You might see an avatar walk from one point in space to another, but down at the pixel level nothing moves at all. Each point confines its activity to changing colour or turning on and off. The other reason is that, while in the case of an image on the monitor, it might be possible in principle to magnify your vision until the pixels become apparent, in the case of the space network it would be impossible to do likewise, because our very perception arises from it and so can never be more fine-grained than it is.

That line of reasoning also fixes the problem of ‘time’. Remember that, in a CA run on a computer, every cell is updated simultaneously. This would be impossible in the case of the CA running the Universe, because the speed of light imposes limits on how fast information can travel. Co-ordinated behaviour such as that observed in CAs requires a built-in clock, but wherever the clock happens to be located, the signals it transmits are going to take a while to travel to cells that are located far away from it. One might think that having many clocks distributed throughout the network would solve the problem, but it would not, because the speed of light would not allow signals to travel between all the clocks fast enough to ensure they were synchronised.

Wolfram came up with a simple solution, which was to do away with the notion that every cell updates at the same time as every other. Instead, at each step, only one cell is updated. Time, just like space, is divided up into discrete ‘cubes’ and at any given moment it is in only one of these ‘cubes’ that time moves a step forward. At the next moment, time in that cube is frozen and another point in the space network is updated. Again, this reality ‘as is’ seems totally unlike reality as perceived. When was the last time you noticed all activity was frozen in place, save for one point in space here… and now here… and now here… that moved forward a fraction? No, in RL we have no lag, no waiting for our reality to update. Our RL never goes offline. But, of course, in the case of SL, the reason we notice when it has gone offline or is being updated is because it is (not yet) running the software of our consciousness. That, for now, is still largely confined within our brains. But when it comes to the CA running the Universe, your very awareness would be frozen – be offline – when any cell other than your own is being updated. Only when your own cell is updated are you in a position to notice the world about you, and when this happens, all that you see is that everything else has moved a fraction. In an absolute sense, each tick of the universal clock might be very slow and RL might actually suffer lag and periods when it is offline that are far longer than anything we endure in SL. But because perception itself proceeds in the same ticks, time seems continuous to us. Again, our perception can be no more fine-grained than the processes computing that perception.
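It is worth seeing why an embedded observer could not tell the difference between the two update schemes. The toy Python below is my own illustration (it is not Wolfram’s actual construction): it steps the same elementary CA either all at once or one cell per tick, with every other cell frozen in a snapshot of the previous generation while a single cell updates. After a full sweep of ticks the two histories are identical, so an observer who only registers completed generations of its own cell sees exactly the same world either way:

    # The same elementary CA stepped two ways: every cell at once, or one cell
    # per tick. In the sequential version, each cell is rewritten from a frozen
    # snapshot of the previous generation; a full generation takes len(row) ticks.
    def rule_step(left, centre, right, rule=110):
        return (rule >> ((left << 2) | (centre << 1) | right)) & 1

    def synchronous_step(row):
        n = len(row)
        return [rule_step(row[i - 1], row[i], row[(i + 1) % n]) for i in range(n)]

    def sequential_step(row):
        n = len(row)
        snapshot = list(row)                 # the rest of the world, frozen
        new_row = list(row)
        for tick in range(n):                # one cell updated per tick of the clock
            new_row[tick] = rule_step(snapshot[tick - 1], snapshot[tick],
                                      snapshot[(tick + 1) % n])
        return new_row

    row = [0] * 10 + [1] + [0] * 10
    print(synchronous_step(row) == sequential_step(row))   # True: histories agree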

Perhaps the reason why the speed of light is limited in the first place is that photons (just like all other particles) are comparable to gliders, which can only advance one cell per computation. In ‘Is the Universe a Virtual Reality?’, Brian Whitworth reasoned, ‘if both space and time arise from a fixed information-processing allocation, that the sum total of space and time processing adds up to the local processing available is reasonable… events in a VR world must have a maximum rate, limited by a finite processor’. The physical outcome of this supposition would be that something would impose a fixed maximum on the speed at which information could travel. And, of course, that is precisely what light does.

As well as this example, Whitworth cites several other ways in which the observed laws of nature coincide with the concept that ours is a virtual reality. It is perhaps incorrect to say that Whitworth considers this proof that reality IS a simulation, only that supposing it is does not contradict what we know about the laws of physics. Whitworth asks what the consequence would be if reality arose from finite information processing. If that were the case, we ought to expect algorithmic simplicity. ‘Calculations repeated at every point of a huge VR Universe must be simple and easily calculated’. And, as it happens, core mathematical laws that describe our world do seem remarkably simple. Whitworth points out that if everything derives from information, we should expect to find digitization when we closely examine the world around us. ‘All events/objects that arise from digital processing must have a minimum quantity’. Modern physics does seem to show that matter, energy, space and time come in quanta.

If you look at the letters in this body of text, each particular letter is identical to every one of its kind. This ‘a’ looks like that ‘a’, this ‘b’ is identical to that ‘b’ and so on. That is because of ‘digital equivalence’. Each letter arises from the same code so obviously they are identical. Similarly, if each photon, each electron and every other particle arises from the same underlying code, they too would be identical to each other. Again, this is what we observe. 

What other ways might reality seek to minimise waste in its information processing? In the virtual worlds that run on our computers, the world is typically not calculated all at once. Rather, the computer only renders the part of reality that the observer is looking at. If that were also true of RL – if reality is only calculated when an interaction occurs – then measuring reality ‘here’ would necessarily cause uncertainty with regards to what happens ‘there’. Or, as Whitworth put it, ‘if complementary objects use the same memory location, the object can appear as having either position or momentum, but not both’.

If the network running our VR were to become overloaded in certain regions, what would the result be? Well, SL residents know all too well what to expect if too many objects are rezzed or too many people gather in one sim. You get slowdown. Suppose that a high concentration of matter similarly constitutes a high processing demand. That being the case, wherever there is a high concentration of mass there ought to be a slowdown of the information processing of spacetime. This is in agreement with general relativity, which predicts that time runs noticeably slower in the presence of the strong gravitational fields caused by a high concentration of mass.
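For readers who want a number to attach to ‘noticeably slower’, the standard Schwarzschild result from general relativity can be evaluated in a few lines of Python (the neutron star figures are rough, round values chosen purely for illustration):

    import math

    # Gravitational time dilation (Schwarzschild approximation): a clock at
    # radius r from a mass M ticks slower than a faraway clock by the factor
    #   sqrt(1 - 2*G*M / (r * c**2))
    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8        # speed of light, m/s

    def time_dilation_factor(mass_kg, radius_m):
        return math.sqrt(1 - 2 * G * mass_kg / (radius_m * c ** 2))

    print(time_dilation_factor(5.97e24, 6.371e6))   # Earth's surface: ~0.9999999993
    print(time_dilation_factor(2.8e30, 1.2e4))      # neutron star surface: roughly 0.81

At the Earth’s surface the ‘lag’ is barely measurable; at the surface of a neutron star, clocks tick at only about four-fifths of their far-away rate.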

Summing up, Whitworth asked the reader, ‘given the big bang, what is simpler, that an objective universe was created out of nothing, or that a virtual reality was booted up? Given the speed of light is a universal maximum, what is simpler, that it depends on the properties of featureless space, or that it represents a maximum processing rate?… Modern physics increasingly suggests… that Occam’s razor now favours a virtual reality over objective reality’. 

Coming up in the final installment: ‘Rise of The Robots and the Jessie Sim Universe’.
