Google and The Red Queen – An Essay By Extropia DaSilva

DIGITAL GAIA

How might pattern recognition capabilities like this be achieved? In Permutation City, Greg Egan suggested one possible approach:

“With a combination of scanners, every psychologically relevant detail of the brain could be read from the living organ — and duplicated on a sufficiently powerful computer. At first, only isolated neural pathways were modelled: Portions of the visual cortex of interest to designers of machine vision”.

There is actually quite a lot of real science behind this fiction. Not so long ago, Technology Review ran an article called ‘The Brain Revealed’, which discussed a new imaging method known as ‘Diffusion Spectrum Imaging’. Apparently, it “offers an unprecedented view of complex neural structures (that) could help explain the workings of the brain”.

Another example would be the research conducted at the ITAM technical institute in Mexico City. Software was designed that mimics the neurons that give rats a sense of place. When loaded with this software, a Sony AIBO was able to recognise places it had been, distinguish between locations that look alike, and determine its location when placed somewhere new.
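
The ITAM team’s actual implementation is not described here, so the following is only a rough sketch of the general idea: simulated ‘place cells’ that respond most strongly near a preferred location, with the robot recognising a place by comparing its current pattern of cell activity against stored patterns. All the numbers and names below are invented for illustration.

```python
# A minimal sketch (not the ITAM group's actual code) of the place-cell idea:
# each simulated "neuron" fires most strongly near a preferred location, and a
# place is recognised by comparing current population activity to stored patterns.
import math
import random

random.seed(1)

# Hypothetical place cells: each has a preferred (x, y) location in the arena.
PLACE_CELLS = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(50)]

def activity(pos, width=1.5):
    """Population activity vector: Gaussian tuning around each cell's centre."""
    x, y = pos
    return [math.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * width ** 2))
            for cx, cy in PLACE_CELLS]

def similarity(a, b):
    """Cosine similarity between two activity vectors."""
    dot = sum(i * j for i, j in zip(a, b))
    norm = math.sqrt(sum(i * i for i in a)) * math.sqrt(sum(j * j for j in b))
    return dot / norm if norm else 0.0

# 'Remembered' places the robot has visited before.
memory = {"corner": activity((1, 1)), "centre": activity((5, 5)), "doorway": activity((9, 2))}

# When placed somewhere, pick the stored place whose activity pattern matches best.
current = activity((4.6, 5.3))
best = max(memory, key=lambda name: similarity(memory[name], current))
print("Robot thinks it is near:", best)   # -> centre
```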

The Blue Brain Project (a collaboration between IBM and the Brain Mind Institute at EPFL) is taking the past 100 years’ worth of knowledge about the microstructure and workings of mammalian brains, and using that information to reverse-engineer a software emulation of a brain down to the level of the molecules that make it up. So far, the team have modelled a neocortical column and recreated experimental results from real brains. The column is being integrated into a simulated animal in a simulated environment, so that the team can observe detailed activity in the column as the ‘animal’ moves around its space. Blue Brain’s director, Henry Markram, said: “it starts to learn things and remember things. We can actually see when it retrieves a memory, and where it comes from because we can trace back every activity of every molecule, every cell, every connection, and see how the memory was formed”.

Eugene M. Izhikevich and Gerald M. Edelman of The Neurosciences Institute have designed a detailed thalamocortical model. This is based on experimental data gathered from several species: diffusion tensor imaging provided the data for global thalamocortical anatomy; in-vitro labelling and 3D reconstructions of single neurons of cat visual cortex provided the cortical microcircuitry; and the model simulates neuron spikes that have been calibrated to reproduce known types of responses recorded in vitro in rats. According to Izhikevich and Edelman, this model “exhibited collective waves and oscillations…similar to those recorded in humans” and “simulated fMRI signals exhibited slow fronto-parietal multi-phase oscillations, as seen in humans”. It was also noted that the model exhibited brain activity that was not explicitly built in, but instead “emerged spontaneously as the result of interactions among anatomical and dynamic processes”.
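
The full thalamocortical simulation involves detailed anatomy and synaptic dynamics, but the spiking neurons at its heart build on Izhikevich’s well-known ‘simple model’. As a flavour of what is being simulated cell by cell, here is a minimal sketch of that model with the standard regular-spiking parameters and a constant injected current (the parameter values are the textbook defaults, not anything specific to the paper quoted above).

```python
# A minimal sketch of Izhikevich's "simple model" of a spiking neuron
# (the large-scale thalamocortical simulation builds on far more elaborate
# versions of this, with detailed anatomy and synaptic dynamics).
def simulate_neuron(current=10.0, ms=200, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Regular-spiking parameters; returns the times (ms) at which spikes occur."""
    v, u = -65.0, b * -65.0           # membrane potential (mV) and recovery variable
    spikes = []
    dt = 0.5                          # integration step in ms
    for step in range(int(ms / dt)):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + current)
        u += dt * (a * (b * v - u))
        if v >= 30.0:                 # spike: reset v and bump the recovery variable
            spikes.append(step * dt)
            v, u = c, u + d
    return spikes

print(simulate_neuron())  # a train of regularly spaced spike times
```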

This kind of thing is known as ‘neuromorphic modelling’. As the name suggests, the idea is to build software and hardware that behave very much like biological brains. I will not say much more about this line of research, as I have covered it several times in my essays. Let us look instead at other ways in which computers may acquire human-like pattern-recognition capabilities.

Vernor Vinge made an interesting speculation when he suggested a ‘Digital Gaia’ scenario as one possible route to superintelligence: “The network of embedded microprocessors becomes sufficiently effective to be considered a superhuman being”.

There is an obvious analogy with the collective intelligence of an ant colony. Edward O. Wilson, the world’s leading authority on social insects, wrote that “a colony is a superorganism; an assembly of workers so tightly-knit… as to act as a single well-coordinated entity”.

Whenever emergence is mentioned, you can be fairly sure that ant colonies will be held up as a prime example of many simple parts collectively producing surprisingly complex outcomes.

Software designers are already looking to ant colonies for inspiration. Cell-phone messages are routed through networks using ‘ant algorithms’ that evolve the shortest route. And Wired guru Kevin Kelly foresees “hundreds of millions of miles of fiberoptic neurons linking billions of ant-smart chips embedded into manufactured products, buried in environmental sensors”.
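
As an illustration of the pheromone trick behind such ‘ant algorithms’ (and emphatically not any real telecom routing code), here is a toy sketch: simulated ants wander across a made-up network, shorter routes accumulate more pheromone, and later ants increasingly prefer them.

```python
# A toy illustration of the 'ant algorithm' idea (not any real routing stack):
# simulated ants wander from source to destination, and shorter routes
# accumulate more pheromone, so later ants increasingly prefer them.
import random

random.seed(0)

# Hypothetical network: node -> {neighbour: link cost}
GRAPH = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 1, "D": 5},
    "C": {"A": 4, "B": 1, "D": 1},
    "D": {"B": 5, "C": 1},
}
pheromone = {(u, v): 1.0 for u in GRAPH for v in GRAPH[u]}

def walk(src, dst):
    """One ant's random walk, biased by current pheromone levels."""
    path, node = [src], src
    while node != dst:
        options = [n for n in GRAPH[node] if n not in path] or list(GRAPH[node])
        weights = [pheromone[(node, n)] for n in options]
        node = random.choices(options, weights=weights)[0]
        path.append(node)
    return path

def cost(path):
    return sum(GRAPH[a][b] for a, b in zip(path, path[1:]))

for _ in range(200):                         # many ants, each depositing pheromone
    p = walk("A", "D")
    for a, b in zip(p, p[1:]):
        pheromone[(a, b)] += 1.0 / cost(p)   # shorter paths get more pheromone
    for edge in pheromone:
        pheromone[edge] *= 0.95              # evaporation keeps old trails from dominating

print(walk("A", "D"))   # usually converges on A -> B -> C -> D, the cheapest route
```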

When talking about ‘Digital Gaia’ we need to consider two things: hardware and software. On the hardware side, we need to consider Moore’s Law and Kurzweil’s Law of Accelerating Returns. The latter is most famously described as ‘the amount of calculations per second that $1,000 buys doubles every 18-24 months’, but it can also be expressed as ‘you can purchase the same amount of computing power for half the cost every 18-24 months’. Consider chip-and-pin smart cards: by 2002 they had as much processing power as a 1980 Apple II, and by 2010 they will have Pentium-class power. Since the same amount of computing power can be bought for half the cost every 24 months or so, it becomes possible to incorporate powerful, once-expensive microprocessors into everyday objects.
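
To make the halving claim concrete, here is a back-of-the-envelope sketch assuming a fixed 24-month halving period (the trend is stated above as a range of 18-24 months, so treat the numbers as order-of-magnitude only).

```python
# Back-of-the-envelope illustration of the cost-halving claim, assuming a
# fixed 24-month halving period (the essay gives a range of 18-24 months).
def cost_of_fixed_compute(initial_cost, years, halving_period_years=2.0):
    """Cost of buying the same amount of computing power after `years`."""
    return initial_cost * 0.5 ** (years / halving_period_years)

# A processor that costs $1,000 today would, on this trend, cost roughly:
for years in (2, 10, 20):
    print(years, "years:", round(cost_of_fixed_compute(1000, years), 2))
# 2 years: 500.0   10 years: 31.25   20 years: 0.98
```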

Of course, hardware is only half of the story. What about software? I would like to quote at length from comments made by Nova Spivack, concerning the direction that the Web as a whole is taking:

“Web 3.0… will really be another push on the back end of the Web, upgrading the infrastructure and data on the Web, using technologies like the Semantic Web, and then many other technologies to make the Web more like a database to enable software to be smarter and more connected…

…Web 4.0…will start to be much more about the intelligence of the Web…we will start to do applications which can do smarter things, and there we’re thinking about intelligent agents, AI and so forth. But, instead of making very big apps, the apps will be thin because most of the intelligence they need will exist on the Web as metadata”.
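
To give a feel for what ‘the Web as a database’ might mean in practice, here is a deliberately simplified sketch: facts stored as machine-readable subject-predicate-object triples, with a ‘thin’ application that merely queries them. Real Semantic Web systems use RDF, OWL and SPARQL; the triples and query function below are invented for illustration.

```python
# An illustrative (and much simplified) sketch of the Semantic Web idea:
# facts live on the Web as machine-readable subject-predicate-object triples,
# and a 'thin' application queries them rather than embedding the knowledge itself.
TRIPLES = [
    ("PermutationCity", "type", "Novel"),
    ("PermutationCity", "author", "GregEgan"),
    ("GregEgan", "type", "Author"),
    ("BlueBrainProject", "type", "ResearchProject"),
    ("BlueBrainProject", "director", "HenryMarkram"),
]

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the given pattern (None acts as a wildcard)."""
    return [(s, p, o) for s, p, o in TRIPLES
            if (subject is None or s == subject)
            and (predicate is None or p == predicate)
            and (obj is None or o == obj)]

print(query(predicate="author"))            # who wrote what
print(query(subject="BlueBrainProject"))    # everything asserted about one resource
```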

One example of how networked sensors could help technology work collaboratively with humans is an experiment conducted at MIT:

Researchers fitted a chair and a mouse with pressure sensors. This enabled the chair to ‘detect’ fidgeting and the mouse to ‘know’ when it was being tightly gripped. In addition, a webcam watched the user to spot shaking of the head. Fidgeting, tightening your grip and shaking your head are all signs of frustration. The researchers were able to train software to recognise frustration with 79% accuracy and provide tutoring feedback when it was needed.
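
The MIT study’s actual features and learning method are not spelled out above, so the following is only a hedged sketch of the general approach: three made-up sensor signals (fidgeting, grip pressure, head-shake score) feeding a tiny hand-rolled perceptron. The synthetic data and the resulting accuracy are purely illustrative.

```python
# A hedged sketch (not the MIT researchers' method): three sensor-derived
# signals feed a tiny perceptron that learns to label "frustrated" vs "calm".
import random

random.seed(2)

def synthetic_sample(frustrated):
    """Made-up sensor readings: frustrated users fidget, grip and shake more."""
    base = 0.7 if frustrated else 0.2
    return [min(1.0, max(0.0, base + random.gauss(0, 0.15))) for _ in range(3)], frustrated

data = [synthetic_sample(random.random() < 0.5) for _ in range(400)]

# Train a simple perceptron-style classifier on the three features.
weights, bias, lr = [0.0, 0.0, 0.0], 0.0, 0.1
for _ in range(20):
    for features, label in data:
        predicted = (sum(w * f for w, f in zip(weights, features)) + bias) > 0
        error = (1 if label else 0) - (1 if predicted else 0)
        weights = [w + lr * error * f for w, f in zip(weights, features)]
        bias += lr * error

correct = sum(((sum(w * f for w, f in zip(weights, fs)) + bias) > 0) == lab
              for fs, lab in data)
print("training accuracy:", correct / len(data))   # high on this easy synthetic data
```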

Or think about how networked embedded microprocessors and metadata could be used to solve the problem of object recognition in robots. Every object might one day carry a chip telling a robot what the object is, where it is, how it is oriented, and how to pick it up and use it properly.
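
No such chip format is specified above, so the record below is purely hypothetical: a sketch of the sort of metadata an embedded tag might expose to a robot, with every field name invented for illustration.

```python
# A hypothetical sketch of the kind of record an embedded chip (or its network
# entry) might expose to a robot; none of these field names come from a real standard.
from dataclasses import dataclass

@dataclass
class ObjectTag:
    name: str
    location: tuple          # (x, y, z) in the room's coordinate frame
    orientation_deg: float   # rotation about the vertical axis
    grasp_point: tuple       # where to grip the object, relative to its centre
    max_grip_force_n: float  # do not squeeze harder than this
    usage_hint: str

mug = ObjectTag(
    name="coffee mug",
    location=(1.2, 0.4, 0.9),
    orientation_deg=90.0,
    grasp_point=(0.05, 0.0, 0.02),   # the handle
    max_grip_force_n=15.0,
    usage_hint="keep upright while moving; contents may be hot",
)

# A robot consulting the tag instead of solving object recognition from vision alone:
print(f"Pick up the {mug.name} at {mug.location}, gripping at {mug.grasp_point}, "
      f"with at most {mug.max_grip_force_n} N.")
```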

‘Digital Gaia’ could also be used to help gather information about societies and individual people, which could then be used by search-engine companies to fine-tune their service. Usama Fayyad, Senior Vice President of Research at Yahoo, put it like this: “With more knowledge about where you are, what you are like, and what you are doing at the moment… the better we will be able to deliver relevant information when people need it”.

We can therefore expect a collaboration between designers of search software and designers of systems for gathering biometric information. A recent edition of the BBC’s ‘Click’ technology programme looked into technology that can identify a person from their particular way of walking. Apparently, such information is admissible as evidence in British courts. You can imagine how Google might one day identify you walking through a shopping mall and target advertisements at you. ‘Minority Report’, here we come!
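
Real gait-recognition systems work from video and are considerably more involved, but as a crude sketch of the matching step: compare a measured gait signature against enrolled ones and report the closest match only if it is close enough. The features, identifiers and threshold below are all made up.

```python
# A crude sketch of gait matching as template comparison on hypothetical
# features (cadence, stride length, arm swing); real systems are far more involved.
def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Hypothetical enrolled gait signatures: [steps per minute, stride length (m), arm swing]
enrolled = {
    "shopper_0042": [112.0, 0.74, 0.31],
    "shopper_0187": [96.0, 0.81, 0.22],
}

observed = [110.5, 0.75, 0.30]          # measured from the mall camera
name, dist = min(((n, distance(sig, observed)) for n, sig in enrolled.items()),
                 key=lambda pair: pair[1])
print((name, round(dist, 2)) if dist < 5.0 else "no confident match")
```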

‘Google and The Red Queen – An Essay By Extropia DaSilva’ by Extropia DaSilva is licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0).

About Extropia DaSilva

Taking today's technological proof-of-principles and theoretically expanding their potentials to imagine SL-meets-The-Matrix is my bag, baby!

  • >And what of mind uploading

    >This knowledge is revealing flaws in the common conception of self. Traditionally (in the West at least), the self has been attributed to an incorporeal soul, making “I” a fixed essence of identity. But neuroscience is revealing the self as an interplay of cells and chemical processes occurring in the brain — in other words, a transitory dynamic phenomena arising from certain physical processes. There seems to be no particular place in the brain where the feeling of “I” belongs, which leads to the theory that it is a number of networks that creates aspects of self.

    Gosh, you had me right up until the end, there, Extropia, I thought it was a story about Communism, what with the Red Queen and all, but instead, it’s a story about Fascism!

    Glad we got that sorted!

    Prokofy Neva
    Director, Society for the Pluralarity, NE Chapter
    Corresponding Member, Association for Neuronic Coherence
    Secretary, Movement for the Promotion of the Feeling of “I”
    For Our Freedom, but…not yours, with totalitarian ideologies like this! Yikes!

  • Extropia DaSilva

    Prok, I am a bit surprised at the passages you chose to quote. I expected people to take issue with the idea of dust-sized sensors here, there and everywhere, exhaustively monitoring the daily activities of groups and individuals, and I also expected people to have a negative opinion of using neuroscience to reverse-engineer the brain’s perception of value in order to make more effective advertisements. I do not know if such things are ‘communist’ or ‘fascist’, but I can appreciate that some people may not like the idea of technologies like that.

    But what the passages you quoted have to do with any political ideology has quite escaped me. I must be missing something obvious; would you care to elaborate on why the move away from notions of a fixed essence of identity, towards the self as a dynamic phenomenon, is ‘fascist’?

  • You’re not someone I’m interested in engaging with, Extropia, because I view you as essentially someone who is certifiably insane.

    Fascists and communists and other totalitarians try to disrupt and disintegrate the integrity of the individual in order to beat a person down, break them, and take them over. Locke, for example, always spoke of the persistence of the self across thinking sessions, if you will. All the great classics and liberal thinkers have always talked about the dignity of the individual as a whole and integrated being, whatever divisive motives, thoughts, impulses might occur within this integral being. Those ideologies that try to make the individual seem like a bundle of chemicals, nerve endings, societal constructs, blah blah, are reductivist and of course trying to justify taking political power over the individual. Divide and conquer.

    Of course the self as “dynamic” is fascist because it implies that the individual isn’t himself, isn’t real, isn’t whole, isn’t sovereign, and therefore this or that piece of him, this or that “I” or collection of feelings or mechanical actions can simply be taken over — by code, groups, institutions, chemistry, science, whatever – ostensibly for his “betterment”.

    If you have to explain the problem of the individual and fascism at this basic a level, you can’t talk to a person normally, as they are not speaking in good faith, or are so abstracted from common sense as to be really delusional. I think in your case, it’s more the latter, but both are operative. Gwyn’s indulgence of you makes her suspect.

    Good bye.

  • I’m fascinated how you can jump from philosophy into ideology by using the “self” as an example. If I read you correctly, any form of definition of the self that is based on the notion that the self is correlated to external experiences (in the sense that it takes groups of people to co-validate their sense of self; thus, “self” is not merely what you think as “self”, but what all others agree upon what your self is), leads to totalitarianism (either communist or fascist).

    So all social constructs based on altruism and inter-relationships lead to totalitarianism?

    On the other hand, the egotistical approach, where self is an isolated phenomenon that requires self-pleasing at the expense of others, leads to liberal societies.

    Hmm. It’s worth thinking about.

    And of course, if you wish to “suspect” me of believing in the fundamentally altruistic and compassionate nature of human beings, I’m guilty as charged!! If that leads to totalitarianism, I have no idea, but I can tell you that I have been taught otherwise 🙂

  • Extropia DaSilva

    …so, Prokofy, you respond to my reply with “You’re not someone I’m interested in engaging with, Extropia”. Uhuh. So your response is, you do not intend to respond. And then you go ahead and respond anyway. Oh, well, I cannot complain since my essay argues that minds are not fixed, but dynamic, fluid and changeable:)

    “Fascists and communists and other totalitarians try to disrupt and disintegrate the integrity of the individual in order to beat a person down, break them, and take them over.”

    Yes, something along these lines is apparent in the last few chapters of Orwell’s ‘1984’, in which, through torture and bonkers philosophical arguments, O’Brien strips Winston Smith of his identity and remoulds him into a perfect citizen of Oceania. At one point, O’Brien declares “reality exists in the human mind, and nowhere else. Not in the individual mind, which can make mistakes and in any case soon perishes: only in the mind of the party, which is collective and immortal”.

    I find it hard to believe that this is an accurate assessment of objective reality. But, when it comes to a virtual world like SL I think it works, up to a point. After all, any virtual world exists by virtue of the people who bring their imaginations to it, and use artifacts designed to be cognitive extensions to add to the accumulating content of that world. Where it breaks down is in the fact that the SL community is no single-minded thing where everyone must conform to some totalitarian’s version of the truth, nor do I think any online world hoping to keep people interested indefinitely ever should be or could be.

    ‘the self as “dynamic” is fascist because it implies that the individual isn’t himself, isn’t real, isn’t whole, isn’t sovereign’.

    The self is a pattern that is reasonably consistent. It is not some immutable object that can never change, but nor is it totally chaotic and ‘noisy’. It is somewhere between those two extremes.

    ‘If you have to explain the problem of the individual and fascism at this basic a level, you can’t talk to a person normally, as they are not speaking in good faith, or are so abstracted from common sense as to be really delusional. I think in your case, it’s more the latter’.

    In my experience, when people say ‘this is true’ or ‘this is wrong’, they really mean ‘this does (or does not) conform to my prejudices’. Common sense evolved to model a very tiny sliver of reality, but the sciences I am interested in routinely push past our mind’s comfort zone. Of course, when we try to piece together a picture of what is going on at this deeper level of reality, it all looks crazy and in violation of common sense. To me, though, the crazy person is the one who believes their common-sense view of reality is a perfect model of how reality actually operates. That is the one true delusion a person can be prone to.

    ‘Good bye.’

    Byeee:)

  • Extropia DaSilva

    Thought I might include a couple of quotes from articles recently posted in ‘Technology Review’:

    Google’s Sergey Brin is quoted as saying “Perfect search requires human-level artificial intelligence, which many of us believe is still quite distant. However, I think it will soon be possible to have a search engine that ‘understands’ more of the queries and documents than we do today. Others claim to have accomplished this, and Google’s systems have more smarts behind the curtains than may be apparent from the outside, but the field as a whole is still shy of where I would have expected it to be”, which agrees with my assessment that search software will strive toward AGI.

    In the article “Cell Phone That Listens And Learns” we are told, “a group at Dartmouth College, in Hanover, NH, has created software that uses the microphone on a cell phone to track and interpret a user’s activity…In testing, the SoundSense software was able to correctly determine when the user was in a particular coffee shop, walking outside, brushing her teeth, cycling, and driving in the car. It also picked up the noise of an ATM machine and a fan in a particular room”. Here we see another step towards a better understanding of ‘what you are doing’, one of the key requirements for improving search software and artificial intelligence.

    So, as the Emperor said in ‘Return Of The Jedi’, “everything is proceeding as I have foreseen. Mwahahahaha!”