SHADES OF GREY: An essay by Extropia DaSilva

Image taken by G. P. at Transumanar.com

INTRODUCTION.

Question. What connects Alan Watts, Richard Dawkins and Henrik Bennetsen? The answer is, they have all written about the human need to make distinctions and separate things into classes. In ‘The Two Hands Of God’, Watts wrote, ‘from the standpoint of thought, the all-important question is ever, “is it this, or that?”. By answering such questions, we describe and explain the world’. Richard Dawkins pointed out that ‘many of our legal and ethical principles depend on the separation between Homo sapiens and all other species’.

And Henrik Bennetsen? He wrote about the two philosophical systems that neatly divide the residents of Second Life ®. You are either an Immersionist, or an Augmentationist.

But look more closely at what these people wrote. While all three identified various ways in which we draw distinctions, they also argued that reality is often not like that. Alan Watts cautioned, ‘in nature herself there are no classes… The importance of a box for thought is that the inside is different to the outside. But in nature the walls of a box are what the inside and the outside have in common’. Richard Dawkins, meanwhile, explained how we can only talk about ‘species’ because so many forms of life have gone extinct and fossil records are so incomplete. ‘People and chimpanzees are linked via a continuous chain of intermediates and a shared ancestor… In a world of perfect and complete information, fossil information as well as recent, discrete names for animals would be impossible’. And while Bennetsen did give his essay the title ‘Augmentation versus Immersion’ and various other bloggers have referenced it when writing about clashes between incompatible beliefs in SL, it seems to have been forgotten that he wrote, ‘I view these two philosophies placed at opposite ends of a scale. Black and white, if you will, with plenty of grey scales in between’.

I think this remark applies to many distinctions, such as ‘natural’/’artificial’; ‘actual’/’virtual’ and ‘person’/’machine’. These distinctions, arguably, are no more grounded in reality than the separation of life forms into species. Furthermore, while the illusion that humans are a distinct species separate from all other animals was brought about by past events (those events being extinctions and the destruction of fossils via geological activity), one can dimly glimpse how current research and development in Genetics, Robotics, Information technology and Nanotechnology might result in a future where it no longer makes sense to distinguish between the natural and the artificial; the actual and the virtual. The consequence of this will go much further than making all those essays about ‘immersionism versus augmentationism’ seem nonsensical to future generations. It also suggests that a technological singularity could happen without anybody noticing.

To understand the reasoning behind both of those suggestions, we need to take a wider view than just the ongoing creation of Second Life. It is, after all, a virtual world existing within a much larger technological system, namely the Web. As we progress through the 21st Century, what is the Web becoming?

THE GOSPEL ACCORDING TO GIBSON.

Transhumanists and related groups tend to imagine that the arrival of the Singularity will be unmistakable, pretty much the Book of Revelation rewritten for people who trust in the word of William Gibson, rather than St. John the Divine. Is this necessarily the case? I would argue that, if the Singularity arrives on the back of ‘Internet AI’, the transition to a post-human reality could be so subtle, most people won’t notice.

The transition from Internet to Omninet (or global brain, or Earthweb, or Metaverse, choose your favourite buzzword) involves at least three trends that might conspire to push technology past the Singularity without us humans noticing. The first trend, networking embedded computers using extreme-bandwidth telecommunications, will make the technological infrastructure underlying the Singularity invisible, thanks to its utter ubiquity. Generally speaking, we only notice technology when it fails us, and it seems to me that, before we can realistically entertain thoughts of Godlike AI, we would first have to establish vast ecologies of ‘narrow’ AIs that manage the technological infrastructure with silent efficiency.

The second trend is the growing collaboration between the ‘human-computing layer’ of people using the Web, and the search software, knowledge databases etc. that are allowing us to share insights, work with increasingly large and diverse amounts of information, and are bringing together hitherto unrelated interests. Vinge noted that ‘every time our ability to access information and communicate it to others is improved, in some sense we have achieved an increase over natural intelligence’. The question this observation provokes is: can we really pinpoint the moment when our augmented ability to access information and collaborate on ideas starts producing knowledge and technology that belongs in the post-human category? Finally, if the Internet is really due to become a free-thinking entity, loosely analogous to the ‘organism’ of the ant colony, would we be any more likely to be aware of its deep thoughts than an ant is to appreciate the decentralized and emergent intelligence of its society?

Looking at the first trend, there’s little doubt that we are rapidly approaching an era where the scale of information technology grows beyond a human’s capacity to comprehend. The computers that make up the US TeraGrid have 20 trillion ops of tightly integrated supercomputer power and a storage capacity of 1,000 trillion bytes of data, all connected to a network that transmits 40 billion bits/sec. What’s more, it’s designed to grow into a system with a thousand times as much power, taking it from the ‘tera’ scale into ‘peta’ territory (a prefix meaning ‘one thousand trillion’), numbers too large to imagine. Then there is the prospect of extreme-bandwidth communication. ‘Wavelength Division Multiplexing’ allows the bandwidth of optical fibre to be divided into many separate colours (wavelengths, in other words), so that a single fibre carries around 96 lasers, each with a capacity of 40 billion bits/sec. It’s also possible to design cables that pack in around 600 strands of optical fibre, for a total of more than a thousand trillion bits per second. Again, this is an awesome amount of information being transmitted.
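To see where that last figure comes from, here is a minimal back-of-the-envelope sketch in Python. It uses only the numbers quoted above (96 wavelengths per fibre, 40 billion bits/sec per laser, around 600 strands per cable); the variable names are purely illustrative, not drawn from any real system.

```python
# Rough sanity check of the bandwidth figures quoted in the essay.
# Inputs are the numbers cited above, not any vendor's specification.

wavelengths_per_fibre = 96        # separate laser colours multiplexed onto one fibre
bits_per_sec_per_laser = 40e9     # 40 billion bits/sec per wavelength
fibres_per_cable = 600            # strands packed into a single cable

per_fibre = wavelengths_per_fibre * bits_per_sec_per_laser   # ~3.84 terabits/sec
per_cable = per_fibre * fibres_per_cable                     # ~2,304 terabits/sec

print(f"One fibre carries about {per_fibre / 1e12:.2f} trillion bits/sec")
print(f"One cable carries about {per_cable / 1e12:,.0f} trillion bits/sec")
# Roughly 3.84 trillion bits/sec per fibre and over 2,300 trillion bits/sec per
# cable, comfortably 'more than a thousand trillion bits per second'.
```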

These two examples represent two of the four overlapping revolutions that are occurring, thanks to the evolution of IT. The first of these, the growth of dumb computing, is referred to by James Martin as ‘the overthrow of matter because it stores such a vast number of bits and logic in such a small space’. It was not so long ago that futurists were making wild claims about a future web with 15 terabytes of content. That is not so impressive compared to Google’s current total database, measured in hundreds of petabytes, which itself now amounts to less than one data centre row.
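Just to put those units side by side, here is a small illustrative calculation; the 300-petabyte figure is an assumed stand-in for ‘hundreds of petabytes’, not a number published by Google.

```python
# Illustrative scale comparison between the old '15 terabytes of web content'
# forecast and a database measured in hundreds of petabytes.

forecast_bytes = 15e12           # 15 terabytes
assumed_current_bytes = 300e15   # 'hundreds of petabytes', illustrated here as 300 PB

ratio = assumed_current_bytes / forecast_bytes
print(f"Hundreds of petabytes is roughly {ratio:,.0f} times the old forecast")
# Roughly 20,000 times, i.e. about four orders of magnitude beyond it.
```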

The second revolution is the ‘overthrow of distance’, a result of fibre-optic networking and wireless communication. These revolutions will ultimately converge on a ‘global computer’ that embraces devices spanning scales from the largest to the smallest. Data centres sprawl across acres of land, acting as huge centralized computers comprised of tens of thousands of servers, and optical networks will transport their data over vast distances without degradation. Today, many of the duties once delegated to the CPU in your PC can now be performed by web-based applications. Current research, such as inscribing lasers on top of chips, together with the aforementioned all-optical networks, will radically decentralize our computing environment, as the Omninet embraces handheld communicators and receives data from ubiquitous sensors no larger than specks of dust. As George Gilder put it, ‘(the Omninet) will link to trillions of sensors around the globe, giving it constant knowledge of the physical state of the world’.

The human species has two abilities that I marvel at. The first is that, collectively, we are able to bring such radical technology out of vapourware, into R+D labs, and eventually weave it into the fabric of everyday life. The second is that, as individuals, we become accustomed to such technology, to the extent that it becomes almost completely unremarkable, as natural as the air we breathe. This latter trait may play a part in ensuring the Singularity happens without us noticing. It’s commonly believed that its coming will be heralded by a cornucopia of wild technology entering our lives, and yet today technologies beyond the imagination of our predecessors are commonplace. It can make for amusing reading to look back on the scepticism levelled at technologies we take for granted. A legal document from 1913 had this to say about predictions made by Lee De Forest, designer of the triode, a vacuum tube that made radio possible: ‘De Forest has said… that it would be possible to transmit the human voice across the Atlantic before many years. Based on these absurd and deliberately misleading statements, the misguided public… has been persuaded to purchase stock in this company’.

To get an idea of just how much attitudes have changed, consider the research that shows users of search engines are satisfied with results delivered within a twentieth of a second. We grow impatient if we’re made to wait much longer. In 1913, the belief that the human voice could be transmitted across vast distances was laughed off as ‘absurd’. In 2007, we have what amounts to a computer-on-a-planet, allowing not only global voice communication but near instantaneous access to pretty much all knowledge, decentralized communities sharing text, images, music and video and even entire online worlds where you can explore every possible facet of self. Our modern society is clearly filled with technology beyond the ‘absurdity’ of trans-atlantic voice communication, so why are we not in a profound state of future shock?

Well, recall the difference between ‘visible’ and ‘invisible’ innovations. Radio waves transmitting voice across the ocean almost instantaneously, actually TALKING to someone on the other side of the world as if they were IN THE SAME ROOM was truly unprecedented. On the other hand, chatting on a mobile phone or online via Skype are just variations on established innovations. In the future, we may have homes fitted with white light LEDs, replacing incandescent light bulbs. This would provide low energy light, and unlike existing light sources it could be readily adapted for optical wireless broadband internet access. Again, I could cite the advantages that this would have over current wi-fi and other radio wave-based wireless. I could also play devil’s advocate and cite all the technical challenges that must be overcome before it is practical. But how much of this will be noticed by the user when they connect wirelessly to the web, as many of us do now? There is nothing here that is startlingly innovative, not any more. It’s now utterly unremarkable that we can flood our homes with light at the flick of a switch, that we have electricity whenever we need it, that the airwaves are filled with radio, TV and telecommunication. It’s all so thoroughly woven into the fabric of our society that it is invisible. We only really appreciate how much we depend upon it on those rare occasions when we hit the ‘on’ button and, thanks to an equipment or power failure, nothing happens.

Computers, though, are still not quite as ‘invisible’ as the TV set is, and that’s because they are not yet ‘plug and play’. I think most people switch on the computer, half expecting it to not boot, fail to connect to the Internet, drop their work down a metaphorical black hole and so on. But it’s certainly the case that modern PCs are vastly easier to use than those behemoth ‘mini’ computers of decades ago, despite the fact that, technically speaking, they pack in orders-of-magnitude more power and complexity. Miniaturization and ease-of-use are both factors in the personalization of computing, and technophiles have plenty of metaphors to describe the natural end-point. Wired’s George Johnson wrote, ‘today’s metaphor is the network… (it) will fill out into a fabric and then… into an all pervasive presence’. Randy Katz explained, ‘the sea is a particularly poignant metaphor, as it interconnects much of the world. We envision fluid information systems that are everywhere and always there’.

In other words, a time when the Internet becomes the Omninet, cyberspace merges with real physical space and simply… vanishes, having become so completely woven into the fabric of society and individual lives we forget it is there. Most people, I think, believe that there is the natural world, consisting of all that is biological, and then there is the artificial world, to which belong products of technology. These two worlds are distinct… or at least they are until you give it some thought. When we use snares, or nets, or bolas, we consider these to be tools and therefore products of the artificial world of technology. But when spiders use their silk to construct snares, or to use the way a gladiator uses a net, or something so like a bolas that this particular arachnid is known as a ‘bolas spider’, in which category do these functional items belong? I suppose a difference between spiders’ various webs and our analogous tools is that the silk is produced by the spider itself, and so could be considered to be just as much a part of its body as its legs or eyes. But other animals make use of discarded items they stumble across, like hermit crabs, which crawl into discarded shells. This is simply re-using the shell’s original ‘purpose’, of course, but beavers fell trees to use as raw building materials for their dams and lodges. When we build dams or erect skyscrapers, these feats of engineering seem incongruous in a way a beaver’s dam or termite mound is not. Yet, in what sense are these not engineering/architectural projects as well?

SHADES OF GREY: An essay by Extropia DaSilva is licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0).

About Extropia DaSilva

Taking today's technological proof-of-principles and theoretically expanding their potentials to imagine SL-meets-The-Matrix is my bag, baby!


  • The idea that human technology does not really change our society by “leaps and bounds” (although when reading history books it looks like that), but strangely mimics evolution (“continuous flowing of forms transition one into another”), is quite dear to me, since I always hated the “black”/”white” concepts that just bipolarise a discussion without reflecting reality.

    Yes, one of the things I was always critical of in transhumanist teachings was the notion that in “the distant future” (or perhaps… tomorrow) the Singularity would emerge and demand that all humans bow to it in worship.

    Clearly that doesn’t reflect, at all, how we humans shaped our society. Yes, lasers were created in the 1960s and they were highly advanced precision instruments, and people thought they would change society (probably by being used as the “ray guns” seen in bad sci-fi movies). They certainly did. They now cost about US$1.50 or so and are found inside billions of CD players everywhere in the world. They’re so commonplace that people have no idea that they carry around a coherent light emission device in their Sony Walkmans: something deemed “impossible” in the days of Maxwell, and “highly unlikely” (or “the product of a very advanced civilisation”) in the 1960s. The same could be said about personal computers, mobile phones, or the Internet.

    But it was all gradual change. Nobody in 1969 thought that people would be using the Internet on their iPhones just a generation and a half later. The concept would have been considered completely insane; you’d be locked up in a padded cell, all your sci-fi books taken away from you, and sedatives given to you to make you sleep in peace and stop talking nonsense.

    I remember Bill Gates’s “Information at your fingertips” motto of the 1990s with a smile. I thought that, well, having encyclopaedias on CDs would be nice, but even projecting ahead in time, it would hardly be possible to have, say, the Library of Congress inside a laptop. I couldn’t have been more wrong: only a few years afterwards, we got both Wikipedia and Google insanely collecting “information at my fingertips” from a staggering amount of data spread across the whole planet. But these days we just take it for granted. There is no more “wow” sense. People log in to Google and see better results from profiling (which Google does; try searching while you’re logged in and then logged out and see the difference). They are getting used to Google’s “hints” saying: “do you wish to search for XXX instead?” In a few years, Google will be even more clever, will figure things out better and better, and we’ll take that for granted too. One day we’ll talk to our eleventh-generation iPhones, asking them to search for the closest available Gap shop and see if it has anything on sale. This will be just commonplace; people will not even understand when we point out that in 2008 that kind of technology was a dream, though certainly a possible one.

    Oh yes, I have no idea if Google is ‘sentient’ 🙂 or when it will be. What I’m pretty sure of is that once Google becomes ‘sentient’ (or once we’re aware of its sentience), we won’t lose nights of sleep over it. It’ll be common. By that time (not that far ahead, a few decades at most) we’ll be used to e-butlers squeaking into our ear implants things like “You’ve got an incoming call from your mother-in-law, do you wish to take it or should I just tell her that you’re busy reading Extropia’s essays?”. We can only smile at these ideas today, but in 2030 or so, kids will just ask us “mommy, when you were a teen, how did you know where your friends were, if you didn’t have GPS on your mobile phones showing green dots for them?” Or when exactly will we tell them that there was a pre-Internet and a post-Internet date? Certainly 1969 launched the Internet as a technology, and certainly it was completely mainstream by 2000. But when exactly can we say that the Internet became a fundamental part of our society? (Computer science historians like to say it was the day Bill Gates realised that Microsoft couldn’t fight against the Internet any more, placing the date in September 1995, when Microsoft embraced the idea that it would also become “the Internet company” rather than aggressively fight against it. But that date is as artificial as anything else; millions of people already used the Internet before September 1995, and billions only started to use it ten years later.)

    The concept of “the approaching Singularity” does not really convince me. On the other hand, the idea of just going through it without realising we’ve done so is quite appealing. In fact, and quoting you again, human technological innovation closely mimics the evolution of species on Planet Earth: we go from form to form, shape to shape, with all steps in between, but there is no real way to place your finger on a specific moment in time and say: “that’s a wolf; that’s a dog”.

    There are really only shades of grey, and thinking otherwise is just fooling yourself.

    Then again… our brains are insanely good pattern-matchers, and almost as good at labelling and classifying things. It’s uncanny to see how we’re almost bipolar in those two conflicting trends. We experience black and white and label it accordingly; but when we actually measure both extremes we see the shades of grey in between. Our brains deal with both and mix the approaches, and I guess that’s why we keep pretending that black and white do exist.

  • Orfeu Miles

    Another stimulating essay !!

    Your essays often use a similar form.

    1. Where we are at now.

    2. Use of the word “increasingly” or “progressively”

    3. Bingo !! The Singularity !!!

    Of course, this may well be true, although I remain to be convinced that more data leads to more intelligence.

    It kinda reminds me of the difference between Scales and Music.

    Music is made up of scales, each with their own laws and states of tension and release with each other. They are the data building blocks of music (along with sundry others: Rhythm, Harmony, Timbre, etc.).

    Ok, let’s see if this works.

    1. Millions of Pianists across the world are practising scales.

    2. Due to technological progress, they “increasingly” improve their technique. They “progressively” get faster and smoother at their execution of playing scales.

    3. Bingo !! Rachmaninoff’s 2nd Piano Concerto !!

    Of course Sergei managed to do exactly this, without the aid of Google. 🙂

    The practising of Scales, may indeed have helped him.
    But it was not the Prime Mover, in terms of the unleashing of his creativity.

    The most persistent problem at the moment is separating the Signal from the Noise, and here we enter a largely subjective realm, where my signal may be your noise.

    What our hypothetical AI considers to be Signal rather than Noise, remains the fascinating if unknowable question.

  • Extropia DaSilva

    ‘I remain to be convinced that more data leads to more intelligence’.

    More data would not lead to intelligence. This is almost like the claim that computers will become intelligent if only we can increase their raw power; that Moore’s Law progresses until… Bingo!!… computers are intelligent. Although Ray Kurzweil is often portrayed by AI skeptics as believing this, it is not actually his position. He has stated several times that the continuation of Moore’s Law is a necessary but not sufficient condition for creating artificial general intelligence.

    More data will not lead to intelligence, but what about more of the right kind of data? What about improvements in our ability to study living brains and extract the salient details as they process information? According to Kurzweil, ‘The pace of working models and simulations is only slightly behind the availability of brain-scanning and neuron structure information’. There are over 50,000 neuroscientists in the world writing articles for more than three hundred journals, as well as scientists and engineers involved in the development of new and improved scanning and sensing technologies, and the likes of Google constantly seeking ways of finding high-level knowledge amongst low-level information.

    OK, so once again I have gone and used the word ‘improvement’ a lot. Admittedly, it is probably true that a lot of theories regarding the emergence of intelligence are inaccurate. But the fact that there are so many different approaches to artificial intelligence can be seen as positive from an evolutionary perspective. After all, nature throws up many possible solutions to any given problem and then weeds out the less effective approaches.

    If information technology can be developed so that humans and computers become better at searching probability space for effective solutions to any given problem, I really don’t see why there wouldn’t be progress towards currently unsolved goals that nature has shown are solvable in principle.

  • Orfeu Miles

    ” and the likes of Google constantly seeking ways of finding high level knowledge amongst low level information.”

    Well…..um…….. ok.
    In which case bring on the implants, because I swear, the more I use Google………the stupider I get.

    ” I really don’t see why there wouldn’t be progress towards currently unsolved goals that nature has shown are solvable in principle.”

    Agreed. !!

    “More data will not lead to intelligence, but what about more of the right kind of data? ”

    “extract the salient details”

    These are indeed the pressing issues.
    I merely note in passing, that these also involve subjective as well as objective judgements.

    I do not mean to be overly querulous, in fact your essays have always offered much food for thought.

  • Extropia DaSilva

    ‘bring on the implants, because I swear, the more I use google………the stupider I get’.

    Google is great for finding the answers to questions like ‘when was the first chess playing computer invented’. Actually, I do not know so let us ask it…

    Ok, so the first analogue chess computer was invented in 1911 (or 1912, historians are not sure), and the first chess playing program to run on computers as we know them was invented in 1951 by Dr. Dietrich Prinz.

    Now, I could commit that information to memory, but why should I bother? After all, Google is so much more efficient at recalling such facts; I might as well just ask it to remember for me whenever I need to know.

    What if Google was with me wherever I went, whispering exposition in my ear, displaying answers in my visual field? From all outward appearances, I would have expansive knowledge, but do I really KNOW anything? This is not a new anxiety, because Socrates said of the then quite recent invention of the alphabet,

    ‘(It) will create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality’.

    (Yes, I asked Google to remember that quote for me).

    Obviously, for a lot of pattern recognition tasks you are much better off using your own brain because AI is just so profoundly inferior. No way would I ask an AI to identify objects like ‘tables’ or ‘chairs’, my brain does a great job, thanks. And while Google translator is OK, it is clearly still advantageous to learn foreign languages.

    But, what if those pattern-recognition capabilities are acquired by search engines, and then coupled with their ability to remember at electronic speeds? What if Google was BETTER at putting names to faces, translating languages, recalling the last place I left my keys… Would I then come to rely on it to do ALL my recollection for me? Would a large portion of my mind exist in the Cloud, leaving my old meatbrain largely empty and unused?

    “extract the salient details”

    ‘These are indeed the pressing issues’.

    Right. Sounds easy, but in practice it is very difficult indeed. Whenever you hear somebody claim AI will be on a par with humans within a few decades, you can be sure that person is not a neuroscientist (s/he is probably a roboticist or something like that). While some neuroscientists freely admit the brain will be reverse engineered into software some day, I know of no neuroscientist who expects that to happen in under a generation.

    ‘I do not mean to be overly querulous’.

    Aww! I like your comments:)

  • Orfeu Miles

    Hmmm, it seems I am not alone in my fears of Google giving me the concentration of a mayfly.

    At the risk of undercutting my own argument somewhat………..I found this on Google. 🙂

    “Is Google Making Us Stupid?”

    http://www.theatlantic.com/doc/200807/google

  • You said:

    “Generally speaking, we only notice technology when it fails us.”

    “Looking at the first trend, there’s little doubt that we are rapidly approaching an era where the scale of information technology grows beyond a human’s capacity to comprehend.”

    “Computers, though, are still not quite as ‘invisible’ as the TV set is, and that’s because they are not yet ‘plug and play’. I think most people switch on the computer, half expecting it to not boot, fail to connect to the Internet, drop their work down a metaphorical black hole and so on.”

    These all resonate with me, in terms of my technology experiences! It is hard to be immersed when your computer crashes repeatedly in the middle of a SL session!

    Great essay, and some wonderful replies. I really enjoyed reading this. I need more of this, and less time reading People magazine online! (At least I have made the break from television. We don’t have cable. Haven’t had it since late 2005. We only use the TV to play video games, and sometimes watch a movie.)

    Princess Ivory
