Google and The Red Queen – An Essay By Extropia DaSilva

As is so often the case, it is science fiction writers who have seen where this is headed. In ‘Accelerando’, Charles Stross wrote:

“They’ve got bandwidth coming out the wazoo, distributed engines running a bazillion inscrutable search tasks, and a whole slew of high-level agents that collectively form a large chunk of the society of mind that is their owner’s personality”.

Another example is Alastair Reynolds' ‘Revelation Space’: “Simply put, he arranged to have every subsequent second of his life monitored by recording systems… over the years the machines learned to predict his responses with astonishing accuracy”.

What we are heading for, in other words, are search engines that are artificial intelligences containing your entire mind, or at least a theory of mind detailed enough to predict your second-by-second needs most of the time.

From a digital person’s point of view, the digital intermediary’s increasingly fine-tuned model could enable a welcome shift in the levels of control that must be surrendered to humans. After all, the more effective the digital intermediary is at modelling the mind of any particular human, the less need there is to rely on meat brains to process our thoughts and feelings for us.

Eventually, the digital intermediary might have fine-tuned its theory of mind to the point where it can produce what Ben Goertzel has called ‘Digital Twins’, described as “an AI-powered avatar [that acts] in virtual worlds on one’s behalf – embodying one’s ideas and preferences and [making] a reasonable emulation of the decisions one would make”.

Notice that Goertzel says ‘on one’s behalf’, implying that digital twins will be like personal assistants or colleagues uncannily tuned to your temperament, skills and so on, but still servants to human masters. That is no doubt how such digital people will seem at first.

Of course, the question of just who is slave and who is master is not always clear-cut when it comes to technology. Sherry Turkle said it all with her comment, “you think you have an organizer, but in time your organizer has you”.

This is not the takeover by brute force so common in science fiction film depictions of human/machine relationships, but more like a soft takeover driven by the convenience of relinquishing some control to technology, freeing the mind to concentrate on other things.

So, we Google something for the umpteenth time rather than commit the information to memory. After all, it is much easier to run a search than it is to memorise pages of text. Doubtless, the refrain ‘why memorise when you can Google’ will only grow stronger as we move into an era of ubiquitous computing and our digital intermediaries are always on hand to remember it for us, wholesale.

And if we one day have access to software equivalents of the visual and auditory cortex, would we similarly rely on technology to recall what name goes with what face, what sound goes with what object, or to perform any other act of cognition you care to name? If the artificial equivalents of the visual cortex or whatever can be made to work faster and more reliably than their biological predecessors, why not?

Computing power, its growth famously charted by Moore’s Law, is likely to rise beyond the capacity of the human brain. Just how far depends on whose theoretical designs you deem plausible. Eric Drexler has patented a nanomechanical computer with enough processing power to simulate one hundred thousand human brains in a cubic centimetre.

Hugo de Garis goes further, saying we will one day be processing one bit per atom, thereby enabling handheld devices that are a million million million million times more powerful.
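
To get a feel for the size of that number, consider how many atoms a handheld gadget actually contains. The figures below are rough assumptions of mine rather than anything de Garis specifies: a device of about a hundred grams, treated for simplicity as if it were pure carbon at twelve grams per mole.

```python
# Rough atom count for a hand-sized device, assuming (for simplicity)
# it is about 100 grams of carbon at 12 grams per mole.

AVOGADRO = 6.022e23        # atoms per mole
DEVICE_MASS_GRAMS = 100.0  # assumed mass of a handheld device
MOLAR_MASS_CARBON = 12.0   # grams per mole

atoms = (DEVICE_MASS_GRAMS / MOLAR_MASS_CARBON) * AVOGADRO
print(f"{atoms:.1e} atoms")  # ~5.0e24, so one bit per atom is ~10^24 bits
```

One bit per atom would therefore mean something like a million million million million bits in the palm of your hand, which is roughly the scale de Garis has in mind.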

Seth Lloyd’s ‘ultimate laptop’ requires converting the mass of a 2.2-pound (one-kilogram) object into energy and processing bits on every resulting photon, thereby producing the equivalent brain power of five billion trillion human civilisations.
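
As for where that ‘five billion trillion civilisations’ figure comes from, a quick back-of-the-envelope calculation does the job. The inputs are assumptions of mine rather than anything stated above: Lloyd’s limit of roughly 5.4 × 10^50 operations per second for a one-kilogram computer, a generous 10^19 operations per second for a single human brain, and around ten billion people per civilisation.

```python
# Back-of-envelope check on the 'ultimate laptop' figure.

ULTIMATE_LAPTOP_OPS = 5.4e50    # ops/sec: Lloyd's limit for one kilogram of matter
BRAIN_OPS = 1e19                # ops/sec: a generous estimate for one human brain
PEOPLE_PER_CIVILISATION = 1e10  # roughly the present world population

civilisation_ops = BRAIN_OPS * PEOPLE_PER_CIVILISATION
equivalent_civilisations = ULTIMATE_LAPTOP_OPS / civilisation_ops
print(f"{equivalent_civilisations:.1e} civilisations")  # ~5.4e21
```

Divide one by the other and you land on roughly 5 × 10^21, which is indeed about five billion trillion civilisations’ worth of thinking every second.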

OK, even I would admit that last theoretical design is probably a bit implausible, but there does seem to be every reason to expect even handheld devices with significantly more processing capability than the human brain is blessed with. If that power can be coupled with the technical know-how to successfully emulate any act of cognition you care to name, who could then argue that humans would not come to rely on the digital intermediary more than on their own comparatively feeble pattern-recognition capabilities?
