Question. What connects Alan Watts, Richard Dawkins and Henrik Bennetsen? The answer is that they have all written about the human need to make distinctions and separate things into classes. In ‘The Two Hands Of God’, Watts wrote, ‘from the standpoint of thought, the all-important question is ever, “is it this, or that?”. By answering such questions, we describe and explain the world’. Richard Dawkins pointed out that ‘many of our legal and ethical principles depend on the separation between Homo sapiens and all other species’.
And Henrik Bennetsen? He wrote about the two philosophical systems that neatly divide the residents of Second Life ®. You are either an Immersionist, or an Augmentationist.
But look more closely at what these people wrote. While all three identified various ways in which we draw distinctions, they also argued that reality is often not like that. Alan Watts cautioned, ‘in nature herself there are no classes… The importance of a box for thought is that the inside is different to the outside. But in nature the walls of a box are what the inside and the outside have in common’. Richard Dawkins, meanwhile, explained how we can only talk about ‘species’ because so many forms of life have gone extinct and fossil records are so incomplete. ‘People and chimpanzees are linked via a continuous chain of intermediates and a shared ancestor… In a world of perfect and complete information, fossil information as well as recent, discrete names for animals would be impossible’. And while Bennetsen did give his essay the title ‘Augmentation versus Immersion’ and various other bloggers have referenced it when writing about clashes between incompatible beliefs in SL, it seems to have been forgotten that he wrote, ‘I view these two philosophies placed at opposite ends of a scale. Black and white, if you will, with plenty of grey scales in between’.
I think this remark applies to many distinctions, such as ‘natural’/‘artificial’, ‘actual’/‘virtual’ and ‘person’/‘machine’. These distinctions, arguably, are no more grounded in reality than the separation of life forms into species. Furthermore, while the illusion that humans are a distinct species separate from all other animals was brought about by past events (those events being extinctions and the destruction of fossils via geological activity), one can dimly glimpse how current research and development in genetics, robotics, information technology and nanotechnology might result in a future where it no longer makes sense to distinguish between the natural and the artificial, the actual and the virtual. The consequences of this will go much further than making all those essays about ‘immersionism versus augmentationism’ seem nonsensical to future generations. It also suggests that a technological singularity could happen without anybody noticing.
To understand the reasoning behind both of those suggestions, we need to take a wider view than just the ongoing creation of Second Life. It is, after all, a virtual world existing within a much larger technological system, namely the Web. As we progress through the 21st Century, what is the Web becoming?
THE GOSPEL ACCORDING TO GIBSON.
Transhumanists and related groups tend to imagine that the arrival of the Singularity will be unmistakable, pretty much the Book of Revelation rewritten for people who trust in the word of William Gibson, rather than St. John the Divine. Is this necessarily the case? I would argue that, if the Singularity arrives on the back of ‘Internet AI’, the transition to a post-human reality could be so subtle, most people won’t notice.
The transition from Internet to Omninet (or global brain, or Earthweb, or Metaverse, choose your favourite buzzword) involves at least three trends that might conspire to push technology past the Singularity without us humans noticing. The first trend, networking embedded computers using extreme-bandwidth telecommunications, will make the technological infrastructure underlying the Singularity invisible, thanks to its utter ubiquity. Generally speaking, we only notice technology when it fails us, and it seems to me that, before we can realistically entertain thoughts of Godlike AI, we would first have to establish vast ecologies of ‘narrow’ AIs that manage the technological infrastructure with silent efficiency.
The second trend is the growing collaboration between the ‘human-computing layer’ of people using the Web and the search software, knowledge databases and other tools that are allowing us to share insights, work with increasingly large and diverse amounts of information, and bring together hitherto unrelated interests. Vinge noted that ‘every time our ability to access information and communicate it to others is improved, in some sense we have achieved an increase over natural intelligence’. The question this observation provokes is: can we really pinpoint the moment when our augmented ability to access information and collaborate on ideas starts producing knowledge and technology that belong in the post-human category? Finally, if the Internet is really due to become a free-thinking entity, loosely analogous to the ‘organism’ of the ant colony, would we be any more likely to be aware of its deep thoughts than an ant is to appreciate the decentralized and emergent intelligence of its society?
Looking at the first trend, there’s little doubt that we are rapidly approaching an era where the scale of information technology grows beyond a human’s capacity to comprehend. The computers that make up the US TeraGrid deliver 20 trillion operations per second of tightly integrated supercomputer power and a storage capacity of 1,000 trillion bytes of data, all connected to a network that transmits 40 billion bits/sec. What’s more, it’s designed to grow into a system with a thousand times as much power, which would push its storage into the ‘exabyte’ range (‘exa’ meaning a billion billion), numbers too large to imagine. Then there is the prospect of extreme-bandwidth communication. ‘Wavelength Division Multiplexing’ allows the bandwidth of optical fibre to be divided into many separate colours (wavelengths, in other words), so that a single fibre carries around 96 lasers, each with a capacity of 40 billion bits/sec. It’s also possible to design cables that pack in around 600 strands of optical fibre, for a total of more than a thousand trillion bits per second. Again, this is an awesome amount of information being transmitted.
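The arithmetic behind those bandwidth figures is easy to check. Here is a minimal sketch using only the values quoted above (40 billion bits/sec per laser, 96 wavelengths per fibre, 600 strands per cable):

```python
# Back-of-the-envelope check of the optical-fibre capacity figures quoted
# in the text. All three input values come from the essay itself.

BITS_PER_LASER = 40e9     # 40 billion bits per second per wavelength
LASERS_PER_FIBRE = 96     # wavelengths multiplexed onto one strand (WDM)
STRANDS_PER_CABLE = 600   # optical fibre strands packed into one cable

per_fibre = BITS_PER_LASER * LASERS_PER_FIBRE  # capacity of one strand
per_cable = per_fibre * STRANDS_PER_CABLE      # capacity of the whole cable

print(f"One fibre strand: {per_fibre / 1e12:.2f} trillion bits/sec")
print(f"Whole cable:      {per_cable / 1e15:.2f} thousand trillion bits/sec")
```

This gives roughly 3.84 trillion bits/sec per strand and about 2.3 thousand trillion bits/sec per cable, consistent with the ‘more than a thousand trillion bits per second’ claimed above.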
These two examples represent two of the four overlapping revolutions that are occurring, thanks to the evolution of IT. The first of these, the growth of dumb computing, is referred to by James Martin as ‘the overthrow of matter because it stores such a vast number of bits and logic in such a small space’. It was not so long ago that futurists were making wild claims about a future web with 15 terabytes of content. Those claims seem quaint compared to Google’s current total database, measured in hundreds of petabytes, which itself now amounts to less than one data centre row.
The second revolution is the ‘overthrow of distance’, a result of fibre-optic networking and wireless communication. These revolutions will ultimately converge on a ‘global computer’ that embraces devices spanning scales from the largest to the smallest. Data centres sprawl across acres of land, acting as huge centralized computers composed of tens of thousands of servers, while optical networks transport their data over vast distances without degradation. Today, many of the duties once delegated to the CPU in your PC can be performed by web-based applications. Current research, such as inscribing lasers directly onto chips, together with the aforementioned all-optical networks, will radically decentralize our computing environment, as the Omninet embraces handheld communicators and receives data from ubiquitous sensors no larger than specks of dust. As George Gilder put it, ‘(the Omninet) will link to trillions of sensors around the globe, giving it constant knowledge of the physical state of the world’.
The human species has two abilities that I marvel at. The first is that, collectively, we are able to bring such radical technology out of vapourware, into R+D labs, and eventually weave it into the fabric of everyday life. The second is that, as individuals, we become accustomed to such technology, to the extent that it becomes almost completely unremarkable, as natural as the air we breathe. This latter trait may play a part in ensuring the Singularity happens without us noticing. It’s commonly believed that its coming will be heralded by a cornucopia of wild technology entering our lives, and yet today technologies beyond the imagination of our predecessors are commonplace. It can make for amusing reading to look back on the scepticism levelled at technologies we take for granted. A legal document from 1913 had this to say about predictions made by Lee De Forest, designer of the triode, a vacuum tube that made radio possible: ‘De Forest has said… that it would be possible to transmit the human voice across the Atlantic before many years. Based on these absurd and deliberately misleading statements, the misguided public… has been persuaded to purchase stock in this company’.
To get an idea of just how much attitudes have changed, consider the research that shows users of search engines are satisfied with results delivered within a twentieth of a second. We grow impatient if we’re made to wait much longer. In 1913, the belief that the human voice could be transmitted across vast distances was laughed off as ‘absurd’. In 2007, we have what amounts to a computer-on-a-planet, allowing not only global voice communication but near-instantaneous access to pretty much all knowledge, decentralized communities sharing text, images, music and video, and even entire online worlds where you can explore every possible facet of self. Our modern society is clearly filled with technology beyond the ‘absurdity’ of transatlantic voice communication, so why are we not in a profound state of future shock?
Well, recall the difference between ‘visible’ and ‘invisible’ innovations. Radio waves transmitting voice across the ocean almost instantaneously, actually TALKING to someone on the other side of the world as if they were IN THE SAME ROOM was truly unprecedented. On the other hand, chatting on a mobile phone or online via Skype are just variations on established innovations. In the future, we may have homes fitted with white light LEDs, replacing incandescent light bulbs. This would provide low energy light, and unlike existing light sources it could be readily adapted for optical wireless broadband internet access. Again, I could cite the advantages that this would have over current wi-fi and other radio wave-based wireless. I could also play devil’s advocate and cite all the technical challenges that must be overcome before it is practical. But how much of this will be noticed by the user when they connect wirelessly to the web, as many of us do now? There is nothing here that is startlingly innovative, not any more. It’s now utterly unremarkable that we can flood our homes with light at the flick of a switch, that we have electricity whenever we need it, that the airwaves are filled with radio, TV and telecommunication. It’s all so thoroughly woven into the fabric of our society that it is invisible. We only really appreciate how much we depend upon it on those rare occasions when we hit the ‘on’ button and, thanks to an equipment or power failure, nothing happens.
Computers, though, are still not quite as ‘invisible’ as the TV set is, and that’s because they are not yet ‘plug and play’. I think most people switch on the computer, half expecting it to not boot, fail to connect to the Internet, drop their work down a metaphorical black hole and so on. But it’s certainly the case that modern PCs are vastly easier to use than those behemoth ‘mini’ computers of decades ago, despite the fact that, technically speaking, they pack in orders-of-magnitude more power and complexity. Miniaturization and ease-of-use are both factors in the personalization of computing, and technophiles have plenty of metaphors to describe the natural end-point. Wired’s George Johnson wrote, ‘today’s metaphor is the network… (it) will fill out into a fabric and then… into an all pervasive presence’. Randy Katz explained, ‘the sea is a particularly poignant metaphor, as it interconnects much of the world. We envision fluid information systems that are everywhere and always there’.
In other words, a time when the Internet becomes the Omninet, cyberspace merges with real physical space and simply… vanishes, having become so completely woven into the fabric of society and individual lives that we forget it is there. Most people, I think, believe that there is the natural world, consisting of all that is biological, and then there is the artificial world, to which belong products of technology. These two worlds are distinct… or at least they are until you give it some thought. When we use snares, or nets, or bolas, we consider these to be tools and therefore products of the artificial world of technology. But when spiders use their silk to construct snares, or wield it the way a gladiator uses a net, or make something so like a bolas that one particular arachnid is known as a ‘bolas spider’, in which category do these functional items belong? I suppose a difference between spiders’ various webs and our analogous tools is that the silk is produced by the spider itself, and so could be considered just as much a part of its body as its legs or eyes. But other animals make use of discarded items they stumble across, like hermit crabs, which crawl into discarded shells. This is simply re-using the shell’s original ‘purpose’, of course, but beavers fell trees to use as raw building materials for their dams and lodges. When we build dams or erect skyscrapers, these feats of engineering seem incongruous in a way a beaver’s dam or termite mound is not. Yet in what sense are these not engineering/architectural projects as well?
The fact is that we cannot separate the world so neatly into ‘natural’ things over here and artificial things over there, because there exists a smooth continuum of examples blending one with the other. From the myriad phenotypes that grow from a single cell, to animals like spiders that manufacture an extended phenotype from bodily materials, to animals like hermit crabs that hunt down a single useful discarded item, to beavers etc. that gather much material and put it to a purpose it was not evolved to serve, the ‘artificial’ and the ‘natural’ are closely related.
If there is a difference between the examples listed in the preceding paragraph and our technology, it is that those tools developed no faster than natural selection permits. Human tools, on the other hand, have developed at a pace that, in comparison to evolution, led to our modern society in an eye blink. In just a couple of million years, we went from seeking caves in which to shelter (like a hermit crab seeking discarded shells), to emulating termites by building homes out of hardened mud, to our modern cities with their towering skyscrapers of glass and steel. This rapid progress has seen our technology grow into something that appears increasingly alienated from the natural environment. Deserts, forests and oceans are natural environments. Farmland seems very natural too, but it is actually engineered by us to grow crops whose genes were selected by our guiding hand, as opposed to natural selection. Few people feel they are in a natural environment when in a city like New York. Straight lines and right angles dominate, both of which seem abhorrent to nature. At night, when celestial mechanics dictate all should be dark, our urban environments blaze with light and buzz with activity.
From the hunter-gatherer society to the information age, the trend has been for our rapidly-growing technologies to become increasingly distinct from the natural world. So when we anticipate a future Singularity, we assume that the super-duper technological growth that marks its arrival will ensure it is as unmissable as the Hoover dam. This vision of an ‘Omninet’, though, dictates that the trend is now reversing, at least where information tech is concerned. Vinge predicts that embedded networks spreading ‘beneath the net, supporting it as much as plankton supports the ocean ecology’ will become so ubiquitous that they comprise a sort of cyberspace Gaia merged with the biosphere. For users of this network, ease-of-operation has moved beyond the point where connecting to the Web and accessing its functions is as effortless as getting water to flow from the tap. This is web surfing as intuitive as breathing, as natural as the experience of hearing these words spoken in your mind as you read. Terms like logging off and logging on cease to have meaning, because now the net is omnipresent (hence, ‘Omninet’). Sci-fi visions of becoming immersed in cyberspace imagined this would occur via us ‘jacking in’ by plugging a cable into our brains. Cyberspace might indeed enter our brains, albeit via a network of nanoscale transponders communicating with neurons and each other on a local area wireless network. But, ultimately, if this idea of an Omninet is valid, immersion will happen because the Internet spreads out into ubiquitous sensors that pervade the environment.
THE SEMANTIC WEB.
The sheer quantity of data and diversity of knowledge that will exist in this age would overwhelm us, absolutely requiring advanced machine intelligence to help organize and make sense of it. Right now, the Internet is a valuable source of information, but it has a weakness in that documents written in HTML are designed to be understood by people rather than machines. This is unlike the spreadsheets, word processors and other applications stored on your computer, which have an underlying layer of machine-readable data. The job of viewing, searching and combining the information contained in address books, spreadsheets and calendars is made relatively simple thanks to a division of labour: the goal-setting, pattern-recognition and decision-making are supplied by humans, while the storage, retrieval, correlation, calculation and data presentation are handled by the computer.
It’s currently much more difficult to effectively search, manipulate and combine data on the Internet, because that additional layer — data that can be understood by machines — is missing. The ‘Semantic Web’ is an ongoing effort to resolve this deficiency. At its heart lies ‘RDF’ or ‘Resource Description Framework’. If HTML is a mark-up language for text, making the Web something like a huge book, RDF is a mark-up language for data, and the Semantic Web is a huge database comprised of interconnected terms and information that can be automatically followed.
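To make the ‘huge database of interconnected terms that can be automatically followed’ idea concrete, here is a toy sketch in the spirit of RDF. Everything is expressed as subject-predicate-object triples, and a program follows the links mechanically. The vocabulary and triples are invented for illustration; real RDF uses URIs and serializations such as Turtle.

```python
# Toy triple store: data as (subject, predicate, object) statements that
# software can traverse without human help. All terms here are invented.

triples = [
    ("London", "isCapitalOf", "UK"),
    ("UK", "hasCurrency", "PoundSterling"),
    ("Paris", "isCapitalOf", "France"),
    ("France", "hasCurrency", "Euro"),
]

def objects(subject, predicate):
    """All objects linked from `subject` by `predicate`."""
    return [o for s, p, o in triples if s == subject and p == predicate]

def currency_of_capital(city):
    """Follow two links automatically: city -> country -> currency."""
    for country in objects(city, "isCapitalOf"):
        for currency in objects(country, "hasCurrency"):
            return currency
    return None

print(currency_of_capital("London"))  # follows London -> UK -> PoundSterling
```

No single triple states which currency London’s shops take, yet the answer falls out of chaining two machine-readable statements; that chaining is what HTML alone cannot offer.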
Once there’s a common language that allows computers to represent and share data, they will be in a better position to understand new concepts as people do — by relating them to things they already know. They will also be more capable of understanding that when one website uses the term ‘heart attack’ and another uses the term ‘myocardial infarction’, they are talking about the same thing. They would, after all, have the same semantic tag. If you wanted to determine how well a project is going, the Semantic Web would make it easier to map the dependencies and relationships among people, meeting minutes, research and other material. If you went to a weather site, you could pull off that data and drop it into a spreadsheet. The structure of the knowledge we have about any content on the web will become understandable to computers, thanks to the Semantic Web’s inference layer that allows machines to link definitions. When thousands of concepts, terms, phrases and so on are linked together, we’ll be in a better position to obtain meaningful and relevant results, and to facilitate automated information gathering and research. As Tim Berners-Lee predicted, ‘the day-to-day mechanisms of trade, bureaucracy and our daily lives will be handled by machines talking to machines, leaving humans to provide the inspiration and intuition’.
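A toy illustration of why a shared semantic tag helps with the ‘heart attack’/‘myocardial infarction’ problem. The concept identifier, term list and documents below are invented for this example, not drawn from any real ontology:

```python
# Toy model of semantic tagging: different surface terms resolve to one
# shared concept identifier, so a query matches documents regardless of
# which synonym each site happens to use. All names here are invented.

CONCEPTS = {
    "heart attack": "ex:MyocardialInfarction",
    "myocardial infarction": "ex:MyocardialInfarction",
}

documents = [
    {"title": "Patient leaflet", "term": "heart attack"},
    {"title": "Journal abstract", "term": "myocardial infarction"},
    {"title": "Weather report", "term": "rainfall"},
]

def search_by_concept(query_term):
    """Return titles of documents whose tag maps to the query's concept."""
    concept = CONCEPTS.get(query_term)
    return [d["title"] for d in documents
            if concept is not None and CONCEPTS.get(d["term"]) == concept]

print(search_by_concept("heart attack"))
# Both medical documents are found, though only one uses the query's wording.
```

A plain keyword search would have missed the journal abstract entirely; the shared tag is what lets the machine see past the vocabulary.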
The Semantic Web will bring us closer to Berners-Lee’s original vision for the Internet. He saw it not only as a tool for communication, but also as a creative, collaborative playspace where people would not only present their finished ideas, but also leave a trail for others to see why things had been done a certain way. Science, in particular, would be greatly facilitated by the Semantic Web, since it would provide unprecedented access to each field’s datasets and the ability to perform analyses on them. Researchers in disciplines like artificial intelligence, whose ultimate goals require the collaborative efforts of many scientific groups, will find the ability to perform very powerful searches across hundreds of sites, and to bridge barriers created by technical jargon, immensely useful.
Speaking of AI, the vast storage capacity and wealth of information that will exist in an era of ubiquitous web access will make machine intelligence a real necessity, but at the same time having sensors everywhere and a wealth of information will make the task of building smarter machines easier. This is because an AI system that has to make a recommendation based on a few datapoints is bound to work less well than one with access to a lot of information about its users. With many sensors in the environment forming a network, it may be possible for computer intelligence to obtain necessary information without having to rely on complicated perception. A simple example would be a floor covered with pressure-sensitive sensors that track your footsteps, letting a robot know where you are and where you are headed. Also, contemporary experiments in which volunteers have had their daily lives closely monitored via wearable devices have revealed that up to 90 percent of what most people do in any day follows routines so completely that just a few mathematical equations suffice to predict their behaviour. Psychologist John Bargh explained, ‘most of a person’s everyday life is determined not by their conscious intentions and deliberate choices but by mental processes put into motion by the environment’. If that environment were full of sensors feeding information to software designed to learn from them, the Omninet would grow increasingly capable of anticipating our needs and presenting information at the exact moment it’s needed and in the correct context.
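A hedged sketch of what ‘a few equations predicting routine behaviour’ might look like in practice: a first-order model that, given a log of room-to-room movements such as hypothetical floor sensors might record, predicts the most likely next location. The log and room names are invented.

```python
from collections import Counter, defaultdict

# Invented log of successive locations, as ubiquitous pressure sensors
# might record them over several mornings.
log = ["bedroom", "kitchen", "hall", "office", "kitchen", "hall",
       "office", "kitchen", "hall", "office", "bedroom", "kitchen", "hall"]

# First-order model: count which location tends to follow which.
transitions = defaultdict(Counter)
for here, there in zip(log, log[1:]):
    transitions[here][there] += 1

def predict_next(location):
    """Most frequently observed successor of `location`, or None if unseen."""
    followers = transitions[location]
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("kitchen"))  # "hall": the kitchen always led there above
```

Nothing here understands *why* the person heads for the hall; mere frequency counting over a sensor log is enough to anticipate the move, which is the point Bargh’s observation suggests.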
Again, this will require the inclusion of machine-readable data, and networked embedded computers may increase the likelihood that this will be attended to with little human effort. A photograph’s location could be assigned automatically using geographic tracking via GPS. Moreover, now that we have people uploading countless snapshots to the Web, there’s a vast amount of material with which to train object-recognition software. According to New Scientist, ‘robots and computer programs are starting to take advantage of the wealth of images posted online to find out about everyday objects. When presented with a new word, instead of using the limited index it has been programmed with, this new breed of automaton goes online and enters the word into Google, (using) the resulting range of images to recognise the object in the real world’. Eventually, each document type that we use to record and organize our daily lives will be tagged with data that allows the Omninet to identify the nature of each by analysing its form and content. And by linking this metadata, people will be able to rely increasingly on automated image recognition, natural language processing and full text/speech searches to hunt down particular websites or emails, or to recollect barely remembered events from a few sparse phrases, sounds or images.
At this point, it might be worth remembering that significant outcomes can be a result of mundane causes. The transformation of the Internet into something like an omnipresent oracle is not the explicit goal of most R&D today. Semantic Web tools are mostly used for the more conservative purpose of coding and connecting companies’ data so that it becomes useful across the organization. Much of this re-organization is invisible to the consumer; perhaps the only outward sign is an increase in the efficiency with which financial data can be sorted, or the way improved, automated databases make shopping online less of a headache. The relationship is two-way, of course. Companies benefit from the metadata they obtain about their users, just as the user benefits from the increased efficiency brought about by the companies’ semantic tools. IBM offers a service that finds discussions of a client’s products on message boards and blogs, drawing conclusions about trends. Again, nothing startling here, just ways to improve market research. But these mundane tools and the conservative steps they enable are laying down the foundation upon which the next generation of Semantic Web applications will be built, and so it continues step by cumulative step. From the perspective of each consecutive step, the next immediate goal invariably seems just as mundane as ever, but as many thousands of these steps are taken, we progress smoothly toward tremendous technological advances. As disparate data sources get tied together with RDF and developers order terms according to their conceptual relationships to one another, the Web will be transformed from a great pile of documents that might hold an answer, to something more like a system for answering questions. Probably, the dream of a web populated by truly intelligent software agents automatically doing our bidding will be realised after we have established a global system for making data accessible through queries. 
EarthWeb co-founder, Nova Spivack, sees the progression toward a global brain occurring in these two steps: ‘First comes the world wide database… with no AI involved. Step two is the intelligent web, enabling software to process information more intelligently’.
THE HUMAN COMPUTING LAYER.
Organizing data, metadata and links between data nodes is obviously not the only way of leveraging the human intelligence embedded in the Web. We have long been used to harnessing the unused processing power of millions of individual PCs for distributed computing projects like SETI@home, and now ‘crowdsourcing’ uses the Internet to form distributed labour networks exploiting the spare processing power of millions of human brains. The cost barriers that once separated the professional from the amateur have been eroded thanks to technological advances in everything from product design software to digital video cameras, and industries as disparate as pharmaceuticals and TV are taking advantage of hobbyists, part-timers and dabblers, with no need to worry about where they are, so long as they are connected to the network. This pool of cheap labour is fast becoming a real necessity, according to Larry Huston, Procter & Gamble’s vice president of innovation and knowledge: ‘Every year research budgets increase at a rate faster than sales. The current R&D model is broken (but now) we have up to 1.5 million researchers working through our external networks’. Those external networks come in the form of websites like Amazon Mechanical Turk, which helps companies find people with a few minutes to spare to perform tasks computers are lousy at (identifying items in a photograph, perhaps). YourEncore helps companies find and hire retired scientists for one-off assignments, and on InnoCentive, problems are posted and anyone on the network can have a go at solving them. Invariably, any open call for submissions will elicit far more junk than it will genuinely useful answers. In fact, a rule of crowdsourcing states ‘the crowd produces mostly crap’. But then there is the rule ‘the crowd is full of specialists’, meaning people with the ‘right stuff’ to actually solve the problem. Just what counts as the right stuff varies from website to website.
The tasks posted on Mechanical Turk could be taken on by anyone with basic literacy skills. On the other hand, sites like iConclude require professional expertise (in this case, expertise in troubleshooting server software). In all cases, the dispersed workforce needs to be able to complete the job remotely, and the task cannot be too big, because what crowdsourcing mostly taps into are those spare moments people have. The OVERALL job may well be immense, such as compiling an online encyclopedia with tens of millions of entries. That doesn’t matter, so long as the task can be divided into micro chunks that people can have a go at if they have the time and inclination. Such tasks might involve correcting errors. Wikipedia enthusiasts quite enjoy ferreting out and fixing inaccuracies that appear on the encyclopedia. That’s one way to get around the problem of sorting the gems from the junk — let the crowd collectively hunt down the best material and correct/eliminate the garbage. Another way is to install cheap, effective filters to separate the wheat from the chaff. But mostly the cost-effectiveness lies in the fact that the correct solution can be bought for a fraction of what it would cost an in-house R&D team to come up with the same solution, and that team would expect payment regardless of whether they solved the problem or not. In contrast, the crowd of ‘solvers’ on InnoCentive are happy to provide their services, knowing full well that if their solution is not selected they earn absolutely nothing.
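The ‘sort the gems from the junk’ step can be sketched as a simple majority vote over redundant micro-task answers. The task and worker answers below are invented; real platforms like Mechanical Turk layer more elaborate quality controls on top of this basic idea.

```python
from collections import Counter

# Each micro-task (e.g. "what object is in this photo?") is given to several
# workers; noisy answers are aggregated by majority vote so that occasional
# junk gets outvoted. All data here is invented for illustration.

answers = {
    "photo-1": ["bicycle", "bicycle", "bike rack", "bicycle"],
    "photo-2": ["teapot", "kettle", "teapot"],
}

def aggregate(worker_answers):
    """Majority answer plus the fraction of workers who agreed with it."""
    best, votes = Counter(worker_answers).most_common(1)[0]
    return best, votes / len(worker_answers)

for task, submitted in answers.items():
    label, agreement = aggregate(submitted)
    print(task, label, f"{agreement:.0%}")
```

The agreement fraction doubles as a cheap filter: tasks where the crowd splits evenly can be flagged for a second round rather than trusted outright.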
One can well imagine how crowdsourcing will become more powerful as the technologies that help computers organize online data improve. After all, the more capable the Web is at supplying answers to questions, or at tracking down that piece of information you require right now, or at bringing together the right combination of minds to collaborate on a problem, the more complex the puzzles that can be solved in the same timeframe. Also, more capable tools could shorten the amount of time required. One such example is ‘NanoEngineer-1’, an open-source CAD package for the design and modelling of atomically-precise components and assemblies. According to Damien Gregory Allis,
prior to NE1, any one of my images involved 3-5 hours… to make sure things with surfaces were within Van der Waals radii contacts, etc. I remember vividly the 1st NE1 board meeting where Mark Sims, in 30 seconds, had generated a Drexler/Merkle bearing from 2 repeater units.
The omninet, then, could be a tremendously powerful driver of education, training both human and machine intelligence to tackle increasingly complex problems. A common criticism regarding the prospect of artificial intelligence is to point out that humanity has been trying to build robots etc. that behave like people for a very long time and so far has had little success. Apparently, this is supposed to be adequate justification for believing the goal is fundamentally unreachable. One might imagine, though, that the extremely limited computational resources that AI researchers had to work with hampered their chances of success. A more important reason was the sheer lack of data concerning the thing they were trying to model — namely, the human brain.
GATHERING INFORMATION FOR BUILDING AI.
Things will be rather different as the Web expands outwards and merges with our physical environment. We will then be in a position to obtain tremendous amounts of data from all scales of human life. Starting at the widest viewpoint, a global network of discrete sensors will obtain information about the patterns of behaviour we exhibit as a species. This is not something that has to wait for ‘the future’ before it can begin; in fact, our social behaviour is already being harnessed to provide insightful data. Tag-rich websites like Flickr and Del.icio.us allow users to create, share and modify their own systems of organization, and their collective activity results in data whose structure is usefully modelled on the way we think. It’s now generally accepted that the trend towards miniaturization will lead to further personalization of our computers, as they progress from desktop or laptop devices to wearable items that wirelessly connect to the Web to access software applications, rather than store and run them as PCs do today. Because a person will be continually connected to the Web, it will be possible to obtain copious amounts of data concerning individual patterns of behaviour. Sensors will be able to record the tiniest details, and smart software will use this information to tailor its services. For instance, we now know that those tiny movements the eyes always make (called ‘microsaccades’) are biased towards objects to which a person is attracted, even when he or she is making efforts to avert the gaze. Today, companies like Microvision are working on eyewear that uses lasers to draw images directly onto the retina for virtual/augmented reality. Perhaps that eyewear could also be equipped with sensors that monitor a person’s microsaccades and infer their object of interest. Another idea (one that’s actively being pursued by Intel) is to use devices that can detect a person’s pitch, volume, tone and rate of speech.
These change in predictable ways depending on our emotional state and social context. Even without understanding the meaning of the spoken words, monitoring and processing such audio information can reveal a lot about a person’s mind, situation and social network. Ultimately, the Omninet’s gaze may focus right down on the workings of the brain itself, as biocompatible nanoscale transponders enable neuroscientists to make millions of recordings per second of the brain as it processes information, thereby obtaining a working model of the brain performing its tasks. The Omninet’s sensors will be woven into all human biological networks, from the smallest scale of the brain’s neural net to the largest networks of society itself. The Semantic Web will also create strong networks among the many scientific fields, furthering the collaboration that is essential for the task of coding general artificial intelligence.
In all probability, ‘artificial’ intelligence will become progressively more capable of performing our pattern-recognition-based forms of intelligence, as our technologically-enhanced ability to contribute to the growing knowledge-base deepens our understanding of the relevant principles. Still, for a while natural intelligence will retain abilities that AI cannot match. It makes sense to tap into crowdsourcing and shunt the parts of a problem that still require human intelligence out to humans. Ubiquitous sensors and the Semantic Web will be just as useful for expanding our educational possibilities as they will be for building AI. We talked earlier about how crowdsourcing taps into the human network for workable solutions, but it is worth remembering that even the many impracticable solutions contain useful information. For instance, they may reveal hidden prejudices and false assumptions that cloud our ability to ask the right questions. The Semantic Web would make it much easier to interlink any document, from the roughest draft to the most polished final (but not necessarily accurate) article, and permanent access to an ever-present internet will provide a medium for capturing our ideas whenever inspiration strikes. Each point and rebuttal an idea generates would also be semantically tagged, allowing anyone to see at a glance the direct agreements and contradictions and the supporting evidence for each view. Supported by machine intelligence, we will collectively trace back to the assumptions that were made and the data that was used, applying techniques like reductio ad absurdum to learn from our mistakes.
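The idea of semantically tagged points and rebuttals can be sketched in miniature. The claims and link types below are invented for illustration, not any real Semantic Web vocabulary: each claim simply records what it supports and what it contradicts, so agreements and disagreements can be read off directly rather than dug out of prose.

```python
# Hypothetical toy model of a semantically tagged debate.
claims = {
    "c1": {"text": "The proposal reduces cost.",  "supports": [],     "contradicts": []},
    "c2": {"text": "Measured costs rose 5%.",     "supports": [],     "contradicts": ["c1"]},
    "c3": {"text": "Pilot study shows savings.",  "supports": ["c1"], "contradicts": []},
}

def rebuttals(claim_id):
    """All claims tagged as contradicting `claim_id`."""
    return [cid for cid, c in claims.items() if claim_id in c["contradicts"]]

def evidence_for(claim_id):
    """All claims tagged as supporting `claim_id`."""
    return [cid for cid, c in claims.items() if claim_id in c["supports"]]

print(rebuttals("c1"))     # claims that directly contradict c1
print(evidence_for("c1"))  # claims offered in support of c1
```

With links like these in place, tracing any conclusion back to its supporting evidence and standing objections becomes a mechanical query rather than a research project, which is the point of the paragraph above.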
It’s reasonable to assume that the questions we ask will yield multiple answers, partly because there is famously more than one way to crack a nut, and partly because each person’s unique life experience leads them to frame and solve a problem in different ways. Harnessing the power of intercreativity (the process of making things or solving problems together) will stockpile solutions and multiple paths towards them, leading to a much richer form of education than the ‘one-size-fits-all’ methods we are currently limited to.
Narrowing our focus down to the individual, one problem with today’s Web is that it knows little about you, and therefore has no model of how you learn or what you do and don’t know. The rapid advances being made in storage, sensor and processor technologies will enable a person to automatically capture and record all the various forms of information one engages with, storing it in a personal digital archive. Everything about the user’s life will be logged and continually processed by machine intelligences, learning about user behaviour and interaction so as to deliver relevant information whenever it’s needed. This may not just be answers, but dynamically-generated explanatory paths that must be understood if the answer is to be illuminating. Our collective endeavour to create a global database that’s organized according to concepts and ways to understand them will generate lots of information about how concepts relate, who believes them and why, what they’re useful for and so on. A smart Web would find the most appropriate path between what you already know and what you need to learn. Your unique learning style will be understood by the Omninet, which will filter, select, and present information in the form of pictures, stories, examples, abstractions — the best and most meaningful explanation of what you need to know. The system will scrutinize explanations that don’t work or that tend to raise particular questions, using various forms of feedback to adjust its explanatory paths.
This won’t be a case of machines taking over the job of teaching, since the Web will harness the power of both networked computing platforms and people. The various explanations and paths connecting them will be created by human activity on the Web, for at least as long as machine intelligence is confined to ‘narrow’ AI. The job of narrow AI will be to present this material in the most suitable form, such as charts, graphics or natural-language text, thereby converting abstract concepts into the domain-specific language that is most appropriate for any particular user. Eventually, software agents will handle this task, but in the meantime human elements will offer realtime assistance, providing the key knowledge, logic and pattern-recognition capabilities AI cannot yet handle.
As well as training AI to perform human capabilities, we should also find ways to enable people to better comprehend data more suited to machine intelligence. We have had some success at representing higher-dimensional mathematical models in a form that can be understood by our brains. Many of M.C. Escher’s prints attempted to convey the mathematical landscape. We talked earlier about the hyperbolic space in which modular forms exist; ‘Circle Limit IV’ embeds that hyperbolic world into the two-dimensional page. The program ‘Mathematica’ can produce 2-D geometric shapes associated with the different values of ‘n’ in Fermat’s equation. Each equation has its own shape, but one thing they share in common is that every single one is punctured with holes. The larger the number of holes in the shape, the larger the value of ‘n’ in the corresponding equation. Before Fermat’s conjecture was proved, the fact that there must always be more than one hole helped differential geometers make a major contribution towards understanding Fermat’s Last Theorem.
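The ‘holes’ in question are what mathematicians call the genus of the Fermat curve, and the relationship to ‘n’ is a standard formula:

```latex
% Genus (number of holes) of the smooth projective curve x^n + y^n = z^n:
g = \frac{(n-1)(n-2)}{2}
% n = 3 gives g = 1 (a single hole); for every n > 3 the genus exceeds 1,
% and Faltings' theorem then guarantees such a curve has only finitely
% many rational points -- the 'more than one hole' fact mentioned above.
```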
Static images such as these use only a portion of the human visual system, which evolved to visualise a 3-D space that can change in time. Systems expert Dan Clemmenson wrote, ‘modelling mathematical problems frequently require a multi-dimensional model, which the software must collapse into a 3-D and a time-D that makes visual sense. Colour, intensity and texture can be used to represent aspects of the problem’.
No doubt, representing multi-dimensional models in a form we evolved to understand causes a great deal of information to be lost. The difference between these representations and the ability to actually comprehend hyperbolic space might be compared to the difference between seeing the world in full colour and seeing it in black and white. The vast majority of mammals, by the way, have pretty poor colour perception, due to their having only two classes of colour-sensitive cells in the retina (technically known as ‘dichromatic’ vision). The exception to this rule is the primates, whose retinas are equipped with a third class of colour-sensitive cells, providing ‘trichromatic’ vision. Actually, thanks to biotechnology, a rodent can now count itself among those animals blessed with trichromatic vision. Scientists reprogrammed its genes so that it would manufacture the extra class of cells. The surprising thing is that the animal’s brain was able to process this extra visual information, despite the fact that no rodent retina had ever sent trichromatic visual data to it before. Rather than limit information to a form we evolved to work with, we could augment our senses and neural architecture in order to perceive that which is beyond our evolved capabilities. Examples of information our brains are pretty poor at comprehending are the complex patterns that exist in financial, scientific and product data. Eventually, brain implants based on massively distributed nanobots will create new neural connections by communicating with each other and with our biological neurons, break existing connections by suppressing neural firing, add completely mechanical networks, allow us to interface intimately with computer programs and AI, and vastly improve all our sensory, pattern-recognition and cognitive abilities.
SINGULARITY HAPPENS WHEN, EXACTLY?
This most definitely sounds like the stuff of wildest science fiction; surely we would have reached the Singularity by then. But remember that a lot of our current capabilities would seem pretty wild to our predecessors. When I say ‘predecessors’ I don’t mean ancestors far back in the mists of time; I mean people within living memory of most of us. In the 1970s, chip designers took three years to design and manufacture integrated circuits with hundreds of components. Today, ICs are hundreds of times more complex, yet they take only nine months to go from concept to manufacture. Aided by CAD tools, a designer can click a change in a part and immediately see its effect on the rest of the design, a task that would have taken a draughtsman several weeks. Moreover, the company Genobyte claims it can ‘create complex adaptive circuits beyond human capacity to design or debug’, including circuits that exceed the best-known solutions to various design problems, something it says has already been proven experimentally.
‘Beyond human capacity to design’? That pretty much sounds like most people’s definition of ‘Singularity’, but the products of genetic algorithms and other forms of automated design aren’t sending many people into the state of future shock brought about by the unfathomably complex. But then, at which point DO we start to see Singularity manifest itself in designs and products that are (in Arthur C. Clarke’s words) ‘indistinguishable from magic’? Bear in mind that we won’t just make one tremendous leap into a world where nanobots have massively upgraded our cognitive abilities and seamlessly woven our brains into the all-pervasive presence of the Omninet. IF that comes about, it will be as a result of many thousands of conservative goals reached by R+D labs working from the cumulative knowledge of our collective past experience. The Omninet, Semantic Web software agents, brain-machine interfaces, nanobot-based mind augmentation: all will descend from a long line of technologies produced by previous rounds of modest goals, a line that leads step by step back to contemporary search engines, wi-fi hotspots and neural prosthetics.
Eric Drexler remarked that the fact that electrical switches can turn one another on and off sounds pretty dull, but if you properly connect enough switches you have a computer. Once we had suitable computers (and other technology) we were ready to perform another experiment. On October 29th, 1969, the first network control packets were sent from the data port of one IMP to another. Sounds rather ho-hum when you put it like that, but this was the first ever Internet connection. Once millions of computers were connected to the Internet, we had the dramatic consequence of the Web and all its applications. (Ok, they didn’t just spontaneously appear as a result of a critical number of connections being established. My point is that most people would not have realised the potential behind that first demonstration). Currently, as we have seen, businesses and other organizations are taking steps to add an additional ‘machine-readable’ layer to web-pages, the Crowd is tagging sites with metadata, and lots of other mundane activity is going on from which a global brain, a planetary superorganism, the EarthWeb collective, may emerge.
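Drexler’s point about switches can be demonstrated in a few lines. A switch that turns other switches on and off is, in essence, a NAND gate, and NAND gates alone are enough to build arithmetic. The sketch below (a standard textbook construction, written here as toy Python) composes nothing but NAND into a half-adder that adds two bits.

```python
# A 'switch controlling switches' modelled as a NAND gate.
def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def half_adder(a: bool, b: bool) -> tuple[bool, bool]:
    """Add two bits using only NAND gates; returns (sum, carry)."""
    n1 = nand(a, b)
    s = nand(nand(a, n1), nand(b, n1))  # XOR built from four NANDs
    carry = nand(n1, n1)                # AND built from two NANDs
    return s, carry

print(half_adder(True, True))   # (False, True): 1 + 1 = binary 10
```

Each gate is as dull as Drexler says; the interesting behaviour (binary addition, and ultimately computation) exists only in how the gates are wired together.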
Is that The Singularity? It’s certainly one of the developments cited by Vinge as a reason for taking the idea seriously. ‘Large computer networks (and their associated users) may “wake up” as a superhumanly intelligent entity’. Most discussions about Singularity, though, focus on his first (of four) possible causes: ‘There may be developed computers that are “awake” and superhumanly intelligent’. The classic definition of general artificial intelligence, in other words.
I suppose this scenario attracts the most attention because it implies a situation in which The Singularity is easily identified as having happened. After all, if you have some machine intelligence sitting on your desk, offering solutions to problems and proposing new theories not yet reached by any human, you might well conclude that you are in the presence of a superintelligence. One slight problem with this scenario is the way it implicitly assumes the mind in the machine, while superior in its ability to perform mental tasks, is not altogether different from a human intelligence. But the whole notion of Singularity is founded on the thought experiment in which a mind capable of understanding the principal operations governing its intelligence modifies and enhances its cognitive architecture, and then uses the resulting increase in ‘the smarts’ to modify and enhance itself further, getting better and better at upgrading itself as each upgrade makes it smarter and smarter. The idea that this intelligence will continue to entertain and educate humans with impressive displays of mental dexterity is one I find hard to fathom. If you think about it, as it approaches Singularity a kind of cognitive event horizon, a mental blindspot, should manifest itself as the mind on the other side recursively enhances itself into an uncomprehending silence.
If life autoevolves to become information so efficiently encoded it looks for all the world like sunlight and dirt, that might explain why our attempts to find signs of cosmic intelligence pick up nothing but sunlight and dirt. As Stephen Witham said (with a nod towards Arthur C. Clarke), ‘any sufficiently advanced communication is indistinguishable from noise’.
I’m not, by the way, saying that the idea of a super-smart AI solving all longstanding questions for us is flat-out absurd. After all, maybe intractable problems such as the meaning of life, or the nature of consciousness, are as intuitively obvious to a smart enough intelligence as… well, as obvious to us as the fact that the dog’s scarily well-matched adversary in the mirror is actually itself. The problem then, of course, is how you could possibly explain the notion of Self to a dog, or the answer to life, the universe, and everything to a human, in a way that is not completely nonsensical to them. ‘What do you mean… 42?’ Ok, I’m being a bit unfair by supposing we seek answers only to deep philosophical questions. Equally, the Mind might master the intricacies of the ageing process and manufacture an elixir of youth. It might absorb all knowledge regarding molecular manufacturing, quickly work out a practical path from where we are now to fully-working systems, and have them in our homes by the next day. What IS implausible is the idea that stories about living in a world of eternal youth, with every materialist whim instantly available, are good approximations of what it is like beyond the veil of the ultimate event horizon. They are not; they are nothing more than infantile fantasies of omnipotence.
Of course, sufficiently advanced genetics, robotics, information technology and nanotechnology might provide us with the means to break through the event horizon. The other scenarios imagined by Vinge describe ways in which humans might increase their cognitive abilities until Singularity is achieved. As well as ‘large computer networks (and their associated users) may “wake up” as a superhumanly intelligent entity’, there is ‘computer/human interfaces may become so intimate that users may reasonably be considered superhumanly intelligent’, and ‘biological science may find the means to improve natural intelligence’.
The scenario in which the Internet is nudged past a critical point of complexity and organization, and becomes effectively a brain with some kind of meta-consciousness, unfortunately introduces a tough conceptual problem. It is not so much the question of how on Earth we go about ‘waking up’ the Internet, but how we could possibly know when such a thing has happened. This is a completely different proposition to the quest to build humanlike intelligence in robots. In that case, we can make a comparison between the AI and our deep and intuitive understanding of human behaviour. I don’t mean we can formulate what consciousness is, just that evolution adapted us to be naturally adept at recognising human traits. People can quickly spot ‘something odd’ about robot imitations, even if they cannot express just what it is that gives them away. But what kind of awareness would a global brain have? It seems extraordinarily unlikely that it would take a form comparable to consciousness as I experience it (and as I assume other humans do). Since we have no idea what it would look like, does it not stand to reason that the phase-transition from mere network to meta-consciousness could happen without our realising it? And if that is the case, how can we be certain that the Web has not already ‘woken up’?
We can’t, plain and simple. But Vinge’s scenario assumes it is the network PLUS ITS USERS that collectively form the superintelligent entity. This sounds analogous to the emergent intelligence of social insects, or our own brains, in the sense that it involves many, many simple processes happening simultaneously, with interactions among the processes that create something substantially more complex than a reductionist study of the parts would reveal. But while it’s quite possible for a person to observe the overall activity of an ant’s nest, it’s extremely hard to imagine how one might go about studying the overall organization of a meta-consciousness distributed throughout a ‘brain’ the size of a planet. We would, after all, be down at the level of the ant or the neuron.
But even so, from my lowly perspective I can think of a good reason not to declare that the Web has achieved an intelligence worthy of the term ‘Singularity’: it treats knowledge and nonsense as equals, which seems far removed from Nova Spivack’s description of ‘EarthWeb’ as ‘a truthful organism. The more important a question is, the more forcefully it will search for the truth and the more brutally it will search for falsehoods’. Not so, today’s Web! Yes, one can easily access a wealth of information that significantly furthers understanding of any subject, but equally one could read worthless pseudo-science, corporate/political spin-doctoring and junk conspiracy theories that, at best, leave you uninformed and, at worst, make you dangerously deluded.
But, hang on a second. We need to remember that if the Internet had organized itself into a Brain, we would be down at the very lowest levels of knowledge-processing. Eliezer Yudkowsky came up with a startling analogy that, while not actually conveying the subjective experience of being post-human, does convey some idea of the gap between such an intelligence and our own: ‘The whole of human knowledge becomes perceivable in a single flash of experience, in the same way that we now perceive an entire picture at once’. But this statement leaves out an intriguing fact discovered by neuroscience: the brain knows about things the mind is unaware of. Needless to say, ‘knows’ is quite the wrong terminology. More precisely, functional brain scans show certain areas of the brain responding to relevant imagery. For instance, looking at an image of an angry face activates the amygdala, a small part of the brain concerned with detecting threatening situations. Looking at happy or neutral expressions causes no activity in this part of the brain. Experiments have shown that if an image of an angry face is shown and then followed by an image of a happy or neutral face, so long as the interval between the two is less than about 40 msec the subject is completely unaware that the angry face was ever shown. Their amygdala, though, DOES respond to the image. So this, and many other experiments, show that the brain is ‘aware’ of things the mind does not know about. Many social situations depend upon this ability. If you are at a party there is no doubt a great hubbub of conversations, music and general ambient noise. At the lower levels of audio processing the brain is responding to every sound, but your mind is able to filter out everything except the speaker you are paying attention to.
If humans could not unconsciously filter out the environmental information gathered by the senses, life would be overwhelmingly complicated. You could argue, then, that knowledge is defined as ‘the intelligent destruction of information’. For animals, that means effectively and intuitively discerning between important and irrelevant sense patterns. We humans, though, have the additional headache of filtering our cultural, philosophical, scientific and theological conceptual frameworks. We are somewhat less capable of distinguishing between ‘junk knowledge’ and the true path to enlightenment. But what happens if we as individuals with PCs, or better still, wearable or implanted devices, become the functional neurons in a global brain? What if groups of connected humans become the various regions of such a brain? That being the case, we as the lowest levels of knowledge-processing would be ‘aware’ of a great amount of ideas, beliefs and concepts that the meta-consciousness does not need to know. Science and pseudo-science, truth and falsehood, accuracy and inaccuracy, all exist in the processes happening in the lower-levels of meta-consciousness that are intuitively filtered out by the higher levels until there is only an enlightened understanding of knowledge in the Mind.
But, hey, whoever said we must be content with our lot, and remain down at the lowest levels of knowledge-processing? To claim such a thing would run contrary to everything I said about how we are building a Web more capable of handling intercreativity. How, though, do we determine the moment when we have reached The Singularity? It’s important to remember that the term is not meant to imply that some variable blows up to infinity. Rather, it’s based on the observation that what separates the human species from the rest of the animal kingdom is the ability to internalize the world and run ‘what ifs’ in our heads. Knowledge and technology allow us to externalise and extend this capability. As Vinge said, ‘by creating the means to execute those simulations at much higher speeds, we are entering a regime as radically different from our human past as we humans are from the lower animals’.
‘Are entering’? Why not ‘have entered’? Why is our current technology, which carries more communications per second than human language has conveyed in 100,000 years, and which has created largely autonomous processes that churn out designs beyond expert understanding, seen merely as a point on the road TO radical change, rather than the final destination? Probably because we can dimly glimpse a new generation of computers, robots, search software… so many improvements in so many areas, hinting that the REAL radical changes have yet to occur. But our growing technological infrastructure may be masked by human adaptability to change, preventing us from appreciating how far we have come, and so leading to miscalculations when we try to determine how far we have left to go. ‘Computers still crash’, the naysayers are fond of pointing out, implying that while there have been undeniable advances in computer hardware, no such progress is apparent in software. But such a view fails to recognise that we task contemporary software with challenges that would have choked the supercomputers of former years. Pixar’s technology took many, many hours to render the frames for the CG movie ‘Toy Story’. Their later movies were crafted using new generations of hardware and software tools, but the time required to render each frame was never reduced by any significant degree. Is that because each generation failed to improve on its predecessors? Of course not! Pixar raised the bar, upped the ante, pushed their tools to the very edge of technical feasibility. Yesterday’s ‘impossible’ is today’s cutting-edge; yesterday’s ‘incredibly difficult’ is today’s ‘moderately easy’; and yesterday’s ‘moderately easy’ is now so intuitive that we are fooled into thinking the ‘little steps’ of progress taken today cover the same ground as the steps previous generations of technology achieved.
And what about the idea that ‘computer/human interfaces may become so intimate that users may reasonably be considered superhumanly intelligent’? Perhaps so, but when? What degree of cyborgization is necessary to achieve ‘Singularity’? Sure, you can invent a phrase like ‘Brain 2.0’, but that implies a boundary more clearly defined than the reality would turn out to be. For why would a person upgraded to ‘Brain 1.99’ not possess cognitive abilities that appear to us, with our version 1.00 brains, as having ‘entered a regime radically different from our human past’? You could ask the same thing of a person whose brain is at 1.80… 1.70… But obviously CONTEMPORARY brain implants (we might label these 1.01) do not give a person superhuman ability, nor do the brain implants existing in R+D labs right now, version 1.02.
But, you know what? To the people upgraded to brain 1.99, the next step towards 2.00 will seem just as conservative and sensible as the progress from today’s generation of implants to the next seems to us. In fact, you can bet that they will debate whether or not such a step is WORTHY of the term 2.0, just as today you hear some people argue that ‘Web 2.0’ is REALLY ‘1.whatever’. Perhaps the greatest fallacy of all is to treat Singularities as if they were physical objects occupying a definite point in space and time. They do not. For while black holes may physically exist and occupy a definite location, the ‘singularity’ at their centre occupies no space except the gap created by incomplete knowledge. Once we have a working model of quantum gravity, the ‘singularity’ inside black holes and at the birth of the universe will vanish in a flash of mathematical clarity. As for the ‘Technological Singularity’, any sufficient increase in smartness brings into focus new questions that could not have been formulated before. As long as there are unanswered mysteries inhabiting the minds of curious entities, the question of whether or not the Singularity is Near will be asked, and the best answer will continue to be ‘we do not know’.
Meanwhile, back in the here and now, we see some SL bloggers take up their position at one extreme or the other of the two philosophical systems. But it is those shades of grey smoothly blending the two philosophies in myriad ways that perhaps deserve closest scrutiny. Technologies that will allow a much more seamless blend between the Internet and our natural environments are reaching maturity, and the consequence of their integration into society will be a greatly diminished ability to distinguish between the core principles of augmentationism and immersionism. The former group can look forward to new software tools and hardware that allow communication hitherto impossible outside of science fiction, while immersionists can rest assured that such technologies will not work half as well as they might unless the metaverse is populated by real digital people and mind children.
But that’s another essay.