Google and The Red Queen – An Essay By Extropia DaSilva

THE SELECTION PRESSURES

There are other ways in which natural selection and technological evolution differ, but let us not dwell on them. It is time to start talking about where search engines are headed. The first question we need to look into, then, is this: What is the environment that search engines are trying to adapt to? Answer: They exist within the accumulated store of human culture.

Another question: What provides the selection pressure that drives the evolution of more effective search software? The answer lies in a distinction: knowledge comes in two forms. There is ‘high-level knowledge’ and there is ‘low-level information’, and it is the need to separate one from the other that exerts the pressure.

High-level knowledge refers to information that is relevant to an individual or group at any given moment. Low-level information is obviously that which is currently not relevant. Equally obviously, high-level knowledge is vastly outnumbered by low-level information. You want to visit only a handful of the billions of websites that make up the Web. There is a photo on Flickr that you are interested in, and many millions of others that do not interest you right now. How do you find what you need amongst all that junk? You rely on search engines.

Philosophers separate knowledge into ‘knowing that’ and ‘knowing how’. I know THAT Mount Everest is 8848 meters high. I know HOW to find out how tall Mount Everest is by using Google. Contemporary search engines are well on their way to nailing ‘knowing that’ — or at least giving the impression of having this capability. Try it. Ask Google questions along the lines of ‘how high’, ‘how fast’, ‘who said’. The chances are excellent that the right answer will be found in the synopses of the top ten links.

But, when it comes to ‘knowing how’, search software lags behind us. You and I understand the meaning of words. We know how to read. If a search engine could read, when we asked a question it could look through millions of websites at electronic speed and then tell us what we want to know. I do not mean it would retrieve websites that contain the right information, leaving us to look for it among all the other stuff on that site that probably does not interest us. I mean it would extract the relevant information and give it to us.
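The gap between retrieving documents and extracting answers can be sketched in a few lines. This is only an illustration: the tiny corpus, the keyword matching, and the sentence splitting are all invented for the example, and bear no resemblance to how a real engine works.

```python
import re

# A toy "web" of documents (purely illustrative).
corpus = {
    "geography.html": "Mount Everest is 8848 meters high. It sits on the border of Nepal and Tibet.",
    "travel.html": "Kathmandu is a common starting point for Everest expeditions.",
    "physics.html": "The speed of light is 299792458 meters per second.",
}

def retrieve(query_terms):
    """Today's model: return whole documents that mention every term,
    leaving the reader to hunt for the answer inside them."""
    return [name for name, text in corpus.items()
            if all(term.lower() in text.lower() for term in query_terms)]

def extract(query_terms):
    """A crude step toward 'reading': return only the sentences that
    mention every term, rather than the whole page."""
    hits = []
    for text in corpus.values():
        for sentence in re.split(r"(?<=[.!?])\s+", text):
            if all(term.lower() in sentence.lower() for term in query_terms):
                hits.append(sentence)
    return hits

print(retrieve(["Everest", "high"]))  # a page to read through
print(extract(["Everest", "high"]))   # the fact itself
```

The first function hands you a page and leaves the reading to you; the second hands you the sentence you wanted. The distance between those two behaviours, scaled to the whole Web, is the distance the essay is describing.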

Nowadays, the Web has a lot more than text stored on it. There are also audio files, video footage and photos. Something like Flickr highlights ways in which computers are good at some kinds of search, while humans are currently better at others. Imagine a person looking through a box that contains a million photos, while at the same time search software looks through a million Flickr images by filename or tag. It would be no contest: the computer would be millions of times faster when it comes to finding a particular image.
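The computer's advantage here is easy to demonstrate, because tag search reduces to a hash-table lookup. In the sketch below the million "photos" and their tags are synthetic stand-ins; the point is only that, once text labels exist, finding every match takes the machine a fraction of a millisecond.

```python
import time
from collections import defaultdict

# A synthetic library of one million "photos", each carrying two tags.
# The tags and naming scheme are invented for illustration.
TAGS = ["sunset", "beach", "cat", "mountain", "city"]
photos = {f"img_{i:07d}.jpg": [TAGS[i % len(TAGS)], TAGS[(i // 7) % len(TAGS)]]
          for i in range(1_000_000)}

# One pass builds an inverted index: tag -> list of photo names.
index = defaultdict(list)
for name, tags in photos.items():
    for tag in set(tags):
        index[tag].append(name)

start = time.perf_counter()
matches = index["cat"]  # every photo tagged 'cat', found in one lookup
elapsed = time.perf_counter() - start
print(f"{len(matches)} matches in {elapsed * 1000:.3f} ms")
```

Notice the catch, though: the trick works only because text labels were attached in advance. Hand the machine raw pixels and ask it to decide which images contain a cat, and the lookup is useless.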

But now imagine that both computer and human are handed a particular photo and asked to identify the objects within that image. Over many millions of years, natural selection favoured brains that were effective at recognising certain patterns. People are superbly adapted to the tasks of understanding speech, identifying objects, inferring emotion from body language and facial expressions, and many others that computers and robots are still pretty bad at.
