SHADES OF GREY: An essay by Extropia DaSilva

SINGULARITY HAPPENS WHEN, EXACTLY?

This most definitely sounds like the stuff of wildest science fiction. We would surely have reached the Singularity by then. But remember that a lot of our current capabilities would seem pretty wild to our predecessors. When I say ‘predecessors’ I don’t mean ancestors far back in the mists of time, I mean within living memory of most of us. In the 1970s, chip designers took three years to design and manufacture integrated circuits with hundreds of components. Today, ICs are hundreds of times more complex, yet they take only nine months to go from concept to manufacture. Aided by CAD tools, a designer can alter a part with a click and immediately see its effect on the rest of the design, a task that would once have taken a draughtsman several weeks. Moreover, the company Genobyte can ‘create complex adaptive circuits beyond human capacity to design or debug, including circuits exceeding best-known solutions to various design problems, which has already been proven experimentally’.

‘Beyond human capacity to design’? That pretty much sounds like most people’s definition of ‘Singularity’, but products of genetic algorithms and other forms of automated design aren’t sending many people into the state of future shock brought about by the unfathomably complex. But then, at which point DO we start to see Singularity manifest itself in designs and products that are (in Arthur C. Clarke’s words) ‘indistinguishable from magic’? Bear in mind that we won’t just make one tremendous leap into a world where nanobots have massively upgraded our cognitive abilities and seamlessly woven our brains into the all-pervasive presence of the Omninet. IF that comes about, it will be as a result of many thousands of conservative goals reached by R+D labs working from the cumulative knowledge of our collective past experience. The Omninet, Semantic Web software agents, brain-machine interfaces, nanobot-based mind augmentation: all will be descended from a long line of technologies produced by previous rounds of modest goals, a line that leads, step by step, back to contemporary search engines, wi-fi hotspots, and neural prosthetics.
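
To give a concrete flavour of that kind of automated design, here is a minimal genetic algorithm sketched in Python. It is purely illustrative, and every detail (the bit-string ‘genome’, the arbitrary target ‘specification’, the parameter values) is my own invention; a system like Genobyte’s evolves real circuits on reconfigurable hardware, not bit strings in memory.

```python
import random

# A toy genetic algorithm, to show the flavour of 'automated design'.
# Illustrative assumptions throughout: real evolvable-hardware systems
# score candidate circuits on reconfigurable chips, not bit strings
# against a fixed target.

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]  # stand-in 'design spec'
POP_SIZE, MUTATION_RATE, GENERATIONS = 50, 0.02, 200

def fitness(genome):
    """Score a candidate design by how many bits match the specification."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    """Flip each bit with a small probability."""
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    """Splice two parent designs together at a random point."""
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break  # the 'spec' has been met; no human designed the winning genome
    parents = population[: POP_SIZE // 2]  # keep the fittest half as parents
    offspring = [mutate(crossover(random.choice(parents), random.choice(parents)))
                 for _ in range(POP_SIZE - len(parents))]
    population = parents + offspring

print(f"generation {generation}: best candidate {population[0]}, "
      f"fitness {fitness(population[0])}/{len(TARGET)}")
```

The point survives the simplification: nobody designs the winning genome. The designer specifies only what counts as ‘better’, and the blind loop of variation and selection does the rest.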

Eric Drexler remarked that the fact that electrical switches can turn one another on and off sounds pretty dull, but if you properly connect enough switches you have a computer. Once we had suitable computers (and other technology) we were ready to perform another experiment. On October 29th, 1969, the first network control packets were sent from the data port of one IMP to another. Sounds rather ho-hum when you say it like that, but this was the first ever ARPANET connection, the direct ancestor of the Internet. Once millions of computers were connected to the Internet, we had the dramatic consequence of the Web and all its applications. (Ok, they didn’t just spontaneously appear as a result of a critical number of connections being established. My point is that most people would not have realised the potential behind that first demonstration). Currently, as we have seen, businesses and other organisations are taking steps to add an additional ‘machine readable’ layer to web-pages, the Crowd are tagging sites with metadata, and lots of other mundane work is going on from which a global brain, a planetary superorganism, the EarthWeb collective, may emerge.
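
Drexler’s observation can even be made literal in a few lines of code. The sketch below is my own illustration, not his: it treats the ‘switch that switches other switches’ as a NAND gate, then wires NAND gates together until one-bit arithmetic falls out, which is the seed of everything a computer does.

```python
# Drexler's point in miniature: one universal 'switch arrangement' (NAND),
# composed with itself, yields all of logic and then arithmetic.
# A sketch only; real logic families differ, but NAND really is universal.

def nand(a, b):
    return 0 if (a and b) else 1

# Everything below is built from nothing but nand().
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor(a, b):  return and_(or_(a, b), nand(a, b))

def half_adder(a, b):
    """Add two bits: returns (sum, carry), the first step towards a CPU."""
    return xor(a, b), and_(a, b)

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} = carry {c}, sum {s}")
```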

Is that The Singularity? It’s certainly one of the developments cited by Vinge as a reason for taking the idea seriously. ‘Large computer networks (and their associated users) may “wake up” as a superhumanly intelligent entity’. Most discussions about Singularity, though, focus on his first (of four) possible causes: ‘There may be developed computers that are “awake” and superhumanly intelligent’. The classic definition of general artificial intelligence, in other words.

I suppose this scenario attracts the most attention because it implies a situation in which The Singularity is easily identified as having happened. After all, if you have some machine intelligence sitting on your desk, offering solutions to problems and proposing new theories not yet reached by any human, you might well conclude that you are in the presence of a super intelligence. One slight problem with this scenario is the way it implicitly assumes the mind in the machine, while superior in its ability to perform mental tasks, is not altogether different to a human intelligence. But the whole notion of Singularity is founded on the thought experiment in which a mind capable of understanding the principles of operation governing its intelligence modifies and enhances its cognitive architecture, and then uses the resulting increase in ‘the smarts’ to modify and enhance itself further, getting better and better at upgrading itself as each upgrade makes it smarter and smarter. The idea that this intelligence will continue to entertain/educate humans with impressive displays of mental dexterity is one I find hard to fathom. If you think about it, as it approaches Singularity, a kind of cognitive event horizon, a mental blindspot, should manifest itself as the mind on the other side recursively enhances itself into a silence we can no longer comprehend.
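
The feedback loop in that thought experiment is easy to caricature in code. The toy model below rests on assumptions entirely of my own choosing: ‘intelligence’ is a single number, and each upgrade multiplies it by a factor that itself grows with current intelligence. It makes no claim about real minds; it only shows how quickly ‘better at getting better’ departs from anything we can track.

```python
# A toy model of recursive self-enhancement. Deliberately crude assumptions:
# a scalar 'intelligence' and an invented growth rule. The only point is to
# show how 'each upgrade makes the next upgrade bigger' runs away.

intelligence = 1.0   # call this 'version 1.00'
HUMAN_LEVEL = 10.0   # arbitrary stand-in for the brightest unaided human

step = 0
while intelligence < 1e6 and step < 100:
    previous = intelligence
    # The smarter the mind, the larger the upgrade it can design for itself.
    intelligence *= 1.0 + 0.1 * intelligence
    step += 1
    note = "  <-- crosses 'human level'" if previous < HUMAN_LEVEL <= intelligence else ""
    print(f"upgrade {step:2d}: {intelligence:>14,.2f}{note}")
```

Run it and the early upgrades look modest, almost boring; a handful of steps after crossing the arbitrary ‘human level’, the numbers stop meaning anything to us. That is the blindspot in miniature.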

If life autoevolves to become information so efficiently encoded it looks for all the world like sunlight and dirt, that might explain why our attempts to find signs of cosmic intelligence pick up nothing but sunlight and dirt. As Stephen Witham said (with a nod towards Arthur C. Clarke) ‘any sufficiently advanced communication is indistinguishable from noise’.

I’m not, by the way, saying that the idea of a super-smart AI solving all longstanding questions for us is flat-out absurd. After all, maybe intractable problems, such as the meaning of life or the nature of consciousness, are as intuitively obvious to a smart enough intelligence as… well, as obvious to us as the fact that the dog’s scarily well-matched adversary in the mirror is actually its own reflection. The problem then, of course, is how you could possibly explain the notion of Self to a dog, or the answer to life, the universe, and everything to a human, in a way that is not completely nonsensical to them. ‘What do you mean… 42?’ Ok, I’m being a bit unfair by supposing we seek answers only to deep philosophical questions. Equally, the Mind might master the intricacies of the ageing process and manufacture an elixir of youth. It might absorb all knowledge regarding molecular manufacturing, quickly work out a practical path from where we are now to fully-working systems, and have them in our homes by the next day. What IS implausible is the idea that stories about living in a world of eternal youth, with every materialist whim instantly satisfied, are good approximations of what it is like beyond the veil of the ultimate event horizon. They are not. They are nothing more than infantile fantasies of omnipotence.

Of course, sufficiently advanced genetics, robotics, information technology and nanotechnology might provide us with the means to break through the event horizon. The other scenarios imagined by Vinge describe ways in which humans might increase their cognitive abilities until Singularity is achieved. As well as ‘large computer networks (and their associated users) may “wake up” as a superhumanly intelligent entity’, there is ‘computer/human interfaces may become so intimate that users may reasonably be considered superhumanly intelligent’, and ‘biological science may provide the means to improve natural intelligence’.

The scenario in which the Internet is nudged past a critical point of complexity and organization and becomes, effectively, a brain with some kind of meta-consciousness introduces a formidably tough conceptual problem. It’s not so much the question of how on Earth we go about “waking up” the Internet, but how we could possibly know when such a thing has happened. This is a completely different proposition to the quest to build humanlike intelligence in robots. In that case, we can make a comparison between the AI and our deep and intuitive understanding of human behaviour. I don’t mean we can formulate what consciousness is, just that evolution adapted us to be naturally adept at recognising human traits. People can quickly spot ‘something odd’ about robot imitations, even if they cannot express just what it is that gives it away. But what kind of awareness would a global brain have? It seems extraordinarily unlikely that it would take on a form comparable to consciousness as I experience it (and as I assume other humans do). Since we have no idea what it looks like, does it not stand to reason that the phase-transition from mere network to meta-consciousness could happen without our realising it? And if that is the case, how can we be certain that the Web has not already ‘woken up’?

We can’t, plain and simple. But Vinge’s scenario assumes it is the network PLUS ITS USERS that collectively form the superintelligent entity. This sounds analogous to the emergent intelligence of social insects, or our own brains, in the sense that it involves many, many simple processes happening simultaneously, with interactions among the processes that create something substantially more complex than a reductionist study of the parts would reveal. But while it’s quite possible for a person to observe the overall activity of an ant’s nest, it’s extremely hard to imagine how one might go about studying the overall organization of a meta-consciousness distributed throughout a ‘brain’ the size of a planet. We would, after all, be down at the level of the ant or the neuron.

But even so, from my lowly perspective I can think of a good reason not to declare that the Web has achieved an intelligence worthy of the term ‘Singularity’. It treats knowledge and nonsense as equals, which seems far removed from Nova Spivack’s description of ‘Earthweb’ as ‘a truthful organism. The more important a question is, the more forcefully it will search for the truth and the more brutally it will search for falsehoods’. Not so, today’s Web! Yes, one can easily access a wealth of information that significantly furthers understanding of any subject, but equally one could read worthless pseudo-science, corporate/political spin-doctoring and junk conspiracy theories that, at best, leave you uninformed and, at worst, make you dangerously deluded.

But, hang on a second. We need to remember that if the Internet had organized itself into a Brain, we would be down at the very lowest levels of knowledge processing. Eliezer Yudkowsky came up with a startling analogy that, while not actually conveying the subjective experience of being post-human, does convey some idea of the gap between such an intelligence and our own. ‘The whole of human knowledge becomes perceivable in a single flash of experience, in the same way that we now perceive an entire picture at once’. But this statement leaves out an intriguing fact discovered by neuroscience: the brain knows about things the mind is unaware of. Needless to say, ‘knows’ is quite the wrong terminology. More precisely, functional brain scans show certain areas of the brain responding to relevant imagery. For instance, looking at an image of an angry face activates the amygdala, a small part of the brain concerned with detecting threatening situations. Looking at happy or neutral expressions causes no activity in this part of the brain. Experiments have shown that if an image of an angry face is shown and then followed by an image of a happy or neutral face, so long as the interval between the two is less than about 40 msec the subject is completely unaware that the angry face was ever shown. Their amygdala, though, DOES respond to the image. Experiments like this show that the brain is ‘aware’ of things the mind does not know about. Many social situations depend upon this ability. If you are at a party there is no doubt a great hubbub of conversations, music and general ambient noise. At the lower levels of audio processing the brain is responding to every sound, but your mind is able to filter out everything except the speaker you are paying attention to.

If humans could not unconsciously filter out the environmental information gathered by the senses, life would be overwhelmingly complicated. You could argue, then, that knowledge is defined as ‘the intelligent destruction of information’. For animals, that means effectively and intuitively discerning between important and irrelevant sense patterns. We humans, though, have the additional headache of filtering the concepts generated by our cultural, philosophical, scientific and theological frameworks. We are somewhat less capable of distinguishing between ‘junk knowledge’ and the true path to enlightenment. But what happens if we as individuals with PCs, or better still, wearable or implanted devices, become the functional neurons in a global brain? What if groups of connected humans become the various regions of such a brain? That being the case, we as the lowest levels of knowledge-processing would be ‘aware’ of a great many ideas, beliefs and concepts that the meta-consciousness does not need to know. Science and pseudo-science, truth and falsehood, accuracy and inaccuracy, all exist in the processes happening at the lower levels of meta-consciousness, intuitively filtered out by the higher levels until there is only an enlightened understanding of knowledge in the Mind.
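
The party scene above makes ‘the intelligent destruction of information’ easy to sketch as code. The toy filter below is my own illustration; the names (attend, focus) and the trivially simple selection rule are assumptions made for clarity, and the brain’s attentional filtering is of course vastly more subtle than a key lookup.

```python
from typing import Iterable, List, Tuple

def attend(events: Iterable[Tuple[str, str]], focus: str) -> List[str]:
    """Destroy every utterance except those from the source in focus."""
    return [utterance for source, utterance in events if source == focus]

# Every stream below reaches the 'senses'; only one reaches 'awareness'.
party = [
    ("music",    "thump thump thump"),
    ("stranger", "...so I told him..."),
    ("friend",   "did you read that essay on the Singularity?"),
    ("stranger", "...and then he said..."),
    ("friend",   "the bit about the global brain was odd."),
]

print(attend(party, focus="friend"))
# The other three events were 'intelligently destroyed' before awareness.
```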

But, hey, whoever said we must be content with our lot, and remain down at the lowest levels of knowledge-processing? To claim such a thing would run contrary to everything I said about how we are building a Web more capable of handling intercreativity. How, though, do we determine the moment when we have reached The Singularity? It’s important to remember that the term is not meant to imply that some variable blows up to infinity. Rather, it’s based on the observation that what separates the human species from the rest of the animal kingdom is the ability to internalize the world and run ‘what ifs’ in our heads. Knowledge and technology allow us to externalise and extend this capability. As Vinge said, ‘by creating the means to execute those simulations at much higher speeds, we are entering a regime as radically different from our human past as we humans are from the lower animals’.

‘Are entering’? Why not ‘have entered’? Why is our current technology, which carries more communications per second than human language has conveyed in 100,000 years, and which has created largely autonomous processes that churn out designs beyond expert understanding, seen merely as a point on the road TO radical change, rather than the final destination? Probably because we can dimly glimpse a new generation of computers, robots, search software… so many improvements in so many areas, hinting that the REAL radical changes have yet to occur. But the growth of our technological infrastructure may be masked by human adaptability to change, preventing us from appreciating how far we have come, and so leading to miscalculations when we try to determine how far we have left to go. ‘Computers still crash’, the naysayers are fond of pointing out, implying that while there have been undeniable advances in computer hardware, no such progress is apparent in software. But such a view fails to recognise that we task contemporary software with challenges that would have choked the supercomputers of former years. Pixar’s technology took many, many hours to render the frames for the CG movie ‘Toy Story’. Their later movies were crafted using new generations of hardware and software tools, but the time required to render each frame was never reduced by any significant degree. Is that because each generation failed to improve on its predecessors? Of course not! Pixar raised the bar, upped the ante, pushed their tools to the very edge of technical feasibility. Yesterday’s ‘impossible’ is today’s cutting edge. Yesterday’s ‘incredibly difficult’ is today’s ‘moderately easy’; yesterday’s ‘moderately easy’ is now so intuitive that we are fooled into thinking today’s ‘little steps’ of progress cover no more ground than the steps taken by previous generations of technology.

And what about this idea that ‘computer/human interfaces may become so intimate that users may reasonably be considered superhumanly intelligent’? Perhaps so, but when? What degree of cyborgization is necessary to achieve ‘Singularity’? Sure, you can invent a phrase like ‘Brain 2.0’, but that implies a boundary more clearly defined than the reality would turn out to be. For why would a person upgraded to ‘Brain 1.99’ not be possessed of cognitive abilities that appear to us, with our version 1.00 brains, as having ‘entered a regime radically different from our human past’? You could ask the same thing of a person whose brain is 1.80… 1.70… But obviously CONTEMPORARY brain implants (we might label these 1.01) do not give a person superhuman ability, nor do the brain implants existing in R+D labs right now, version 1.02.

But, you know what? To the people upgraded to brain 1.99 the next step towards 2.00 will seem just as conservative and sensible as the progress from today’s generation of implants to the next seems to us. In fact, you can bet that they will debate whether or not such a step is WORTHY of the term 2.0, just like today you hear some people argue that ‘Web 2.0’ is REALLY ‘1.whatever’. Perhaps the greatest fallacy of all is to treat Singularities as if they were physical objects occupying a definite point in space and time. They are not. For while black holes may physically exist and occupy a definite location, the ‘singularity’ at their centre occupies no space except the gap created by incomplete knowledge. Once we have a working model of quantum gravity, the ‘singularity’ inside black holes and at the birth of the universe will vanish in a flash of mathematical clarity. As for the ‘Technological Singularity’, any sufficient increase in smartness brings into focus new questions that could not have been formulated before. As long as there are unanswered mysteries inhabiting the minds of curious entities, the question of whether or not the Singularity is Near will be asked, and the best answer will continue to be ‘we do not know’.

Meanwhile, back in the here and now, we see some SL bloggers take up their position at one extreme or the other of the two philosophical systems. But it is those shades of grey, smoothly blending the two philosophies in myriad ways, that perhaps deserve the closest scrutiny. Technologies that will allow a much more seamless blend between the Internet and our natural environments are reaching maturity, and the consequence of their integration into society will be a greatly diminished ability to distinguish between the core principles of augmentation and immersionism. The former group can look forward to new software tools and hardware that allow communication hitherto impossible outside of science fiction, while immersionists can rest assured that such technologies will not work half as well as they might unless the metaverse is populated by real digital people and mind children.

But that’s another essay.