Google and The Red Queen – An Essay By Extropia DaSilva

DIGITAL GAIA

How might pattern-recognition capabilities like these be achieved? In Permutation City, Greg Egan suggested one possible approach:

“With a combination of scanners, every psychologically relevant detail of the brain could be read from the living organ — and duplicated on a sufficiently powerful computer. At first, only isolated neural pathways were modelled: portions of the visual cortex of interest to designers of machine vision”.

There is actually quite a lot of real science to this fiction. Not so long ago, Technology Review ran an article called ‘The Brain Revealed’, which discussed a new imaging method known as ‘Diffusion Spectrum Imaging’. Apparently, it “offers an unprecedented view of complex neural structures (that) could help explain the workings of the brain”.

Another example would be the research conducted at the ITAM technical institute in Mexico City. Software was designed that mimics the neurons that give rats a sense of place. When loaded with this software, a Sony AIBO was able to recognise places it had been, distinguish between locations that look alike, and determine its location when placed somewhere new.
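I do not know the internal details of the ITAM software, so the Python snippet below is only a hypothetical sketch of the general idea behind place-cell-style recognition: each simulated ‘place cell’ fires most strongly near a preferred spot, and the robot recognises a location by matching its current firing pattern against patterns stored for places it has already visited.

```python
import math

# A hypothetical sketch of place-cell-style location recognition (not ITAM's
# actual software). Each simulated "place cell" responds most strongly near a
# preferred spot; a place is recognised by comparing the current firing
# pattern with patterns stored for previously visited locations.
PLACE_CELLS = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0), (2.0, 2.0), (1.0, 1.0)]
TUNING_WIDTH = 1.0  # how broadly each cell responds around its preferred spot


def firing_pattern(x, y):
    """Gaussian response of every place cell to the robot's position (x, y)."""
    return [math.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * TUNING_WIDTH ** 2))
            for cx, cy in PLACE_CELLS]


def similarity(a, b):
    """Cosine similarity between two firing patterns."""
    dot = sum(u * v for u, v in zip(a, b))
    norms = math.sqrt(sum(u * u for u in a)) * math.sqrt(sum(v * v for v in b))
    return dot / norms if norms else 0.0


# "Memorise" two visited places, then identify the best match for a new reading.
memory = {
    "charging dock": firing_pattern(0.1, 0.2),
    "kitchen corner": firing_pattern(1.9, 1.8),
}
current = firing_pattern(0.3, 0.1)
best_match = max(memory, key=lambda name: similarity(memory[name], current))
print(best_match)  # prints: charging dock
```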

The Blue Brain Project, run by Switzerland’s EPFL on an IBM Blue Gene supercomputer, is taking the past 100 years’ worth of knowledge about the microstructure and workings of mammalian brains and using that information to reverse-engineer a software emulation of a brain, down to the level of the molecules that make it up. Currently, the team have modelled a neocortical column and have recreated experimental results from real brains. The column is being integrated into a simulated animal in a simulated environment, so that researchers can observe detailed activity in the column as the ‘animal’ moves around its space. Blue Brain’s director, Henry Markram, said, “it starts to learn things and remember things. We can actually see when it retrieves a memory, and where it comes from because we can trace back every activity of every molecule, every cell, every connection, and see how the memory was formed”.

Eugene M. Izhikevich and Gerald M. Edelman of The Neurosciences Institute have designed a detailed thalamocortical model, based on experimental data gathered from several species: diffusion tensor imaging provided the data for global thalamocortical anatomy; in-vitro labelling and 3D reconstructions of single neurons of cat visual cortex provided the cortical microcircuitry; and the model simulates neuronal spiking calibrated to reproduce known types of responses recorded in vitro in rats. According to Izhikevich and Edelman, this model “exhibited collective waves and oscillations…similar to those recorded in humans” and “simulated fMRI signals exhibited slow fronto-parietal multi-phase oscillations, as seen in humans”. It was also noted that the model exhibited brain activity that was not explicitly built in, but instead “emerged spontaneously as the result of interactions among anatomical and dynamic processes”.
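The basic building block of such simulations is well documented: Izhikevich’s two-variable spiking-neuron model. Below is a minimal single-neuron simulation using the published regular-spiking parameters; the actual thalamocortical model couples enormous numbers of such units together using the anatomical data described above, so take this only as a sketch of the smallest piece.

```python
# A single Izhikevich spiking neuron with the published regular-spiking
# parameters, integrated with a simple Euler step. The full thalamocortical
# model wires vast numbers of such units together using anatomical data;
# this sketch only shows the basic dynamical unit.
a, b, c, d = 0.02, 0.2, -65.0, 8.0   # regular-spiking cortical cell
v, u = -65.0, b * -65.0              # membrane potential (mV) and recovery variable
dt, I = 0.5, 10.0                    # time step (ms) and constant input current

spike_times = []
for step in range(2000):             # simulate one second of activity
    v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:                    # a spike: record it and reset the neuron
        spike_times.append(step * dt)
        v, u = c, u + d

print(f"{len(spike_times)} spikes in 1 s, the first at {spike_times[0]:.1f} ms")
```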

This kind of thing is known as ‘neuromorphic modelling’. As the name suggests, the idea is to build software and hardware that behaves very much like biological brains. I will not say much more about this line of research, as I have covered it several times in my essays. Let us look at other ways in which computers may acquire human-like pattern-recognition capabilities.

Vernor Vinge made an interesting speculation when he suggested a ‘Digital Gaia’ scenario as one possible route to superintelligence: “The network of embedded microprocessors becomes sufficiently effective to be considered a superhuman being”.

There is an obvious analogy with the collective intelligence of an ant colony. The world’s leading authority on social insects — Edward Wilson — wrote, “a colony is a superorganism; an assembly of workers so tightly-knit… as to act as a single well-coordinated entity”.

Whenever emergence is mentioned, you can be fairly sure that ant colonies will be held up as a prime example of many simple parts collectively producing surprisingly complex outcomes.

Software designers are already looking to ant colonies for inspiration. Cell-phone messages are routed through networks using ‘ant algorithms’ that evolve the shortest route. And Wired guru Kevin Kelly foresees “hundreds of millions of miles of fiberoptic neurons linking billions of ant-smart chips embedded into manufactured products, buried in environmental sensors”.
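To illustrate how an ‘ant algorithm’ evolves a short route, here is a toy sketch on a made-up four-node network (not any vendor’s actual routing code): simulated ants wander from source to destination, deposit pheromone in proportion to how short their completed route was, and probabilistically favour well-marked links, so the shortest route comes to dominate.

```python
import random

# A toy 'ant algorithm' on a made-up four-node network. Ants walk from SRC to
# DST, choosing each hop in proportion to pheromone (and inversely to link
# cost); shorter completed routes deposit more pheromone, so over many rounds
# the cheapest route comes to dominate.
GRAPH = {  # node -> {neighbour: link cost}
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 1, "D": 5},
    "C": {"A": 4, "B": 1, "D": 1},
    "D": {"B": 5, "C": 1},
}
SRC, DST = "A", "D"
pheromone = {(u, v): 1.0 for u in GRAPH for v in GRAPH[u]}


def walk():
    """One ant's probabilistic walk; returns its route, or None if it got stuck."""
    route = [SRC]
    while route[-1] != DST:
        here = route[-1]
        options = [n for n in GRAPH[here] if n not in route]
        if not options:
            return None
        weights = [pheromone[(here, n)] / GRAPH[here][n] for n in options]
        route.append(random.choices(options, weights)[0])
    return route


def cost(route):
    return sum(GRAPH[u][v] for u, v in zip(route, route[1:]))


for _ in range(200):                        # release 200 ants
    route = walk()
    for edge in pheromone:                  # evaporation lets stale trails fade
        pheromone[edge] *= 0.95
    if route:
        for u, v in zip(route, route[1:]):  # reinforce the route in both directions
            pheromone[(u, v)] += 1.0 / cost(route)
            pheromone[(v, u)] += 1.0 / cost(route)

# Follow the strongest trail greedily to read off the evolved route.
best = [SRC]
while best[-1] != DST:
    here = best[-1]
    best.append(max((n for n in GRAPH[here] if n not in best),
                    key=lambda n: pheromone[(here, n)]))
print(best, cost(best))  # usually ['A', 'B', 'C', 'D'] with total cost 3
```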

When talking about ‘Digital Gaia’ we need to consider two things: hardware and software. On the hardware side, we need to consider Moore’s Law and Kurzweil’s Law of Accelerating Returns. The latter is most famously described as ‘the amount of calculations per second that $1,000 buys doubles every 18-24 months’, but it can also be expressed as ‘you can purchase the same amount of computing power for half the cost every 18-24 months’. Consider chip-and-pin smart cards. By 2002 they had as much processing power as a 1980 Apple II; by 2010 they will have Pentium-class power. Since the same amount of computing power can be bought for half the cost every 24 months or so, it becomes possible to incorporate powerful, once-expensive microprocessors into everyday objects.
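The arithmetic behind that trend is simple compounding. The snippet below is purely illustrative and assumes a 24-month doubling time, the slow end of the range quoted above.

```python
# Illustrative arithmetic only, assuming a 24-month doubling time (the slow
# end of the 18-24 month range quoted above).
DOUBLING_TIME_YEARS = 2.0


def compute_per_dollar_gain(years_elapsed):
    """How many times more computing power the same money buys after `years_elapsed`."""
    return 2 ** (years_elapsed / DOUBLING_TIME_YEARS)


# The 22 years between 1980 and 2002 span eleven doublings at this rate,
# i.e. roughly a 2,000-fold gain in computing power per dollar; 2002 to 2010
# adds another sixteen-fold gain on top of that.
print(compute_per_dollar_gain(2002 - 1980))  # 2048.0
print(compute_per_dollar_gain(2010 - 2002))  # 16.0
```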

Of course, hardware is only half of the story. What about software? I would like to quote at length from comments made by Nova Spivack, concerning the direction that the Web as a whole is taking:

“Web 3.0… will really be another push on the back end of the Web, upgrading the infrastructure and data on the Web, using technologies like the Semantic Web, and then many other technologies to make the Web more like a database to enable software to be smarter and more connected…

…Web 4.0…will start to be much more about the intelligence of the Web…we will start to do applications which can do smarter things, and there we’re thinking about intelligent agents, AI and so forth. But, instead of making very big apps, the apps will be thin because most of the intelligence they need will exist on the Web as metadata”.
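To make ‘the Web as a database’ slightly more concrete: Semantic Web technologies express knowledge as machine-readable statements that thin applications can query rather than work out for themselves. The toy triple store below is only a sketch of that idea, with invented facts; real systems use RDF stores and query languages such as SPARQL.

```python
# A deliberately tiny, hypothetical triple store in the Semantic Web spirit:
# knowledge lives on the Web as machine-readable subject-predicate-object
# statements, and a "thin" application simply queries them. (Real systems use
# RDF stores and query languages such as SPARQL.)
TRIPLES = [
    ("PermutationCity", "isA", "Novel"),
    ("PermutationCity", "writtenBy", "GregEgan"),
    ("GregEgan", "isA", "Author"),
    ("BlueBrainProject", "ledBy", "HenryMarkram"),
]


def query(subject=None, predicate=None, obj=None):
    """Return every stored triple matching the pattern (None acts as a wildcard)."""
    return [(s, p, o) for (s, p, o) in TRIPLES
            if subject in (None, s) and predicate in (None, p) and obj in (None, o)]


# A thin agent asking the metadata who wrote Permutation City:
print(query(subject="PermutationCity", predicate="writtenBy"))
# -> [('PermutationCity', 'writtenBy', 'GregEgan')]
```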

One example of how networked sensors could help technology work collaboratively with humans comes from an experiment conducted at MIT:

Researchers fitted a chair and a mouse with pressure sensors. This enabled the chair to ‘detect’ fidgeting and the mouse to ‘know’ when it was being tightly gripped. In addition, a webcam watched the user to spot head-shaking. Fidgeting, tightening your grip and shaking your head are all signs of frustration. The researchers were able to train software to recognise frustration with 79% accuracy and provide tutoring feedback when needed.
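We are not told exactly which learning method the MIT researchers used, but the general recipe is standard supervised learning: turn each sensor reading into a feature vector and train a classifier on labelled examples of frustrated and non-frustrated users. The features and data below are invented purely for illustration.

```python
import random
from sklearn.linear_model import LogisticRegression

# A hypothetical sketch of the general recipe, not the MIT group's actual code.
# Each example is a feature vector [fidget rate, grip pressure, head-shake
# count], with a label saying whether the user reported feeling frustrated.
random.seed(0)


def fake_reading(frustrated):
    """Synthetic sensor reading: frustrated users fidget, grip and shake more."""
    base = 0.7 if frustrated else 0.3
    return [base + random.gauss(0, 0.2) for _ in range(3)], int(frustrated)


data = [fake_reading(i % 2 == 0) for i in range(200)]
X = [features for features, _ in data]
y = [label for _, label in data]

classifier = LogisticRegression().fit(X[:150], y[:150])      # train on 150 readings
print(f"accuracy on held-out readings: {classifier.score(X[150:], y[150:]):.0%}")
```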

Or think about how networked embedded microprocessors and metadata could be used to solve the problem of object recognition in robots. Every object might one day have a chip in it, telling a robot what it is and providing location, orientation and manipulation data that tells the robot how to pick the object up and use it properly.
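No such standard exists yet, so the schema below is purely hypothetical, but it shows the sort of metadata an embedded chip might broadcast to a nearby robot.

```python
from dataclasses import dataclass

# A purely hypothetical schema for the metadata an embedded chip might
# broadcast to a nearby robot: what the object is, where it is, how it is
# oriented, and where it can safely be gripped.


@dataclass
class ObjectTag:
    object_id: str
    category: str
    position_m: tuple        # (x, y, z) in the room's coordinate frame, in metres
    orientation_deg: tuple   # (roll, pitch, yaw)
    grasp_points: list       # offsets, in metres, of safe grip locations
    handling_note: str


mug = ObjectTag(
    object_id="mug-0042",
    category="coffee mug",
    position_m=(1.20, 0.45, 0.90),
    orientation_deg=(0.0, 0.0, 35.0),
    grasp_points=[(0.04, 0.0, 0.02)],    # the handle
    handling_note="keep upright; may contain liquid",
)

# A robot reading the tag no longer has to work out from pixels what it is looking at.
print(f"Pick up the {mug.category} at {mug.position_m}, gripping at {mug.grasp_points[0]}")
```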

‘Digital Gaia’ could also be used to help gather information about societies and individual people, which could then be used by search-engine companies to fine-tune their service. Usama Fayyad, Senior Vice President of Research at Yahoo, put it like this: “With more knowledge about where you are, what you are like, and what you are doing at the moment… the better we will be able to deliver relevant information when people need it”.

We can therefore expect a collaboration between designers of search software and designers of systems for gathering biometric information. A recent edition of the BBC’s ‘Click’ technology programme looked into technology that can identify a person from their particular way of walking. Apparently, such information is admissible as evidence in British courts. You can imagine how Google might one day identify you walking through a shopping mall and target advertisements at you. ‘Minority Report’, here we come!
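Before the adverts can find you, of course, the system has to match your walk against a stored identity. Gait-recognition systems differ in their details, but the final matching step can be sketched as a nearest-neighbour comparison of ‘gait signatures’; the features and numbers below are invented purely for illustration.

```python
import math

# Illustrative only: identify a walker by nearest-neighbour matching of a
# made-up "gait signature" (stride period in s, stride length in m, arm-swing
# amplitude in rad). Real systems extract far richer features from video.
GALLERY = {
    "person_A": (1.10, 0.72, 0.40),
    "person_B": (0.95, 0.80, 0.25),
    "person_C": (1.25, 0.65, 0.55),
}


def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def identify(sample, threshold=0.15):
    """Return the closest enrolled identity, or None if nothing is close enough."""
    name = min(GALLERY, key=lambda n: distance(GALLERY[n], sample))
    return name if distance(GALLERY[name], sample) <= threshold else None


print(identify((1.12, 0.70, 0.42)))  # -> person_A
print(identify((0.60, 1.10, 0.90)))  # -> None (an unknown walker)
```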