Four years after Darwin published ‘On the Origin of Species’, Samuel Butler was calling for a theory of evolution for machines. Most attempts at such a theory have tried to frame it in terms of the steady accumulation of changes, recognisable as Darwinian.
But natural selection has certain limitations. For one thing, a new species can only be created through incremental steps. What is more, each step must result in a viable life form. Technology need not be so constrained. So where does that leave us in the search for an evolutionary theory for machines? It certainly does not mean there is no such thing, only that Darwinian selection is not always applicable. How, then, can we explain the appearance of anything that cannot have come about through the steady accumulation of changes to existing technologies?
We have to consider technology in its entirety — not just physical inventions but all processes, devices, modules, methods and algorithms that ever existed — in order to see a kind of evolution at work.
When we do that, we discover that the history of technology is by no means one of more-or-less independent discoveries. This is because any new technology can only come about by using components and methods that already exist. The jet engine, for example, was created by combining pre-existing technologies like compressors, turbines and combustion systems.
W. Brian Arthur, a professor at the Santa Fe Institute, calls this ‘combinatorial evolution’. Some combinations prove useful, and so they persist and spread around the world, becoming potential building blocks for further technologies. Others are eventually surpassed by rival technologies and go extinct. And the many possible combinations that make little sense come to nothing at all.
Every invention stands upon a pyramid of others that made it possible. The history of technology is an evolutionary story of related devices, methods, and exploitations of natural phenomena. It results from people taking what is known at the time, plus a modicum of inspiration, and then combining bits and pieces that already exist in just the right way in order to link some need with some effect that can fulfil it.
THE TECHNOLOGY TRAP
Technological evolution seems to go hand in hand with the accumulation and refinement of knowledge. But when we consider people as individuals we find a great deal of ignorance concerning the fine details of how modern societies function. We all use technologies as if we understand them, when in fact we are largely ignorant.
How has this come about? Suppose our nomadic ancestors were fortunate enough to discover rich and fertile land, and developed technologies to exploit such resources, meeting the tribe’s basic needs. Doing so would have led to prosperity, which in turn would have led to population numbers rising.
But that would have put more strain on the land’s ability to provide for the tribe, and technology would have had to become more sophisticated in order to continually satisfy basic needs. Once the sophistication of technologies and the number of skills required to maintain a society reached a certain level, it would have begun to make sense for an individual to specialise in a few professions, because a person who concentrates on a few tasks all day becomes far better at them than a generalist tends to be.
But, if a society of specialists is to function properly, there needs to be a way of organizing everyone. Systems of management become necessary, coordinating actions and delegating responsibility. Ever-growing stockpiles of building blocks manufactured in dedicated places require transportation. Each advance in transport and communication reduces the economic costs of recombination, making innovation ever less expensive.
Eventually, the need for networks of efficient channels of communication drives the discovery of electromagnetic phenomena and of how they can be used to send and receive digitized information. All objects in the digital domain become data strings that can be acted on in the same way, not even needing to share the same physical space for combinations to happen. Thus does technology become a system, a network of functionalities.
Once tribes have evolved into gigantic societies engaged in economic activities that span the entire planet, each city becomes a technology island, totally dependent on networks of services bringing supplies in from outside. Without such support, the city would die.
In that sense, our comfortable urban lives are technology traps. We are completely dependent upon technologies to maintain our way of life, while at the same time taking it all for granted. However, while it is true to say that we depend on technology for our survival, the extent to which technology depends upon people for its evolution may well become less and less as time goes by.
“CODE IS CODE”
Can we do this? Can we free technology to sense, diagnose and fix problems by itself, or even go as far as conceiving, designing, and building the next generation of technologies autonomously? Well, we are beginning to see technologies that enable us to discern life’s processes at their fundamental level. And our computers have become powerful enough to recreate many of these processes in silico; we are starting to see how they work in inorganic settings.
Richard Dawkins once observed, “genetics today is pure information technology. This, precisely, is why an antifreeze gene can be copied from an arctic fish and pasted into a tomato”. Because life is fundamentally an information technology (in other words, something that operates on the basis of coded instructions) it can be translated into languages understandable to computers, which operate according to the same principles. Christopher Meyer and Stan Davies of the Centre For Business Innovation coined the phrase “Code Is Code”, explaining:
“you can translate biology into information, and information into biology because both operate on the basis of coded instructions, and those codes are translatable. When you get down to it, code is simply code”.
OK, so we can do it, but why should we pursue what Kevin Kelly has called “out-of-controllness”? To understand why, we need to appreciate that potential building blocks do more than just make the next stage possible. They also drive the evolutionary process because of the changes they make, both directly and indirectly, to human life. James Burke put it like this:
“An invention acts rather like a trigger. Because, once it’s there it changes the way things are, and that change stimulates the production of another invention, which in turn causes more change and so on”.
So, technology that depended on a scientific understanding, and on the systematized use of natural phenomena, was both made possible and made necessary by the opportunities and challenges that previous generations of inventions helped bring about.
EVOLUTION IN SILICON
This is just as true today as it ever was, but something is new. Our most complex technologies have reached a point where we must look to the principles of nature to make or govern them.
Since the 1980s, an automated design process with obvious parallels to the natural world has emerged: the ‘evolutionary algorithm’ (EA). Whereas a human designer does not have time to test all possible combinations, an EA comes closer to doing so, thanks to the sheer speed with which computers can explore mathematical space. In the past, their use was limited to a few niche applications, because breeding and evaluating thousands of generations requires ultra-fast computers. But, as EA pioneer John Koza explained:
“We can now undertake evolutionary problems that were once too complicated or time-consuming. Things we couldn’t do in the past, because it would have taken two months to run the genetic program, are now possible in days or less”.
Because the code that results from this evolutionary process is so different from conventional code, programmers find it impossible to follow. The complexity is beyond human capability to design or debug. This is something unprecedented in the scientific era of invention. While the layperson may use technology without understanding it, we at least expect the professional to have complete knowledge of how their designs function. But now that our most complex technologies are grown and trained rather than designed and built, they have to be studied as we now study nature: by probing and experimenting.
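The cycle an EA runs is simple even when the designs it produces are not: generate variants, evaluate them, keep the fittest, and recombine. Here is a minimal sketch in Python of that cycle, using a toy fitness function (counting 1-bits in a bit string) rather than any real engineering objective; the `evolve` helper and all its parameters are illustrative assumptions, not taken from any particular EA library:

```python
import random

def evolve(fitness, genome_length=16, pop_size=50, generations=100):
    """Evolve bit-string genomes toward higher fitness via
    selection, crossover, and point mutation."""
    pop = [[random.randint(0, 1) for _ in range(genome_length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)     # rank by fitness, best first
        survivors = pop[:pop_size // 2]         # selection: keep the fitter half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)  # pick two parents
            cut = random.randrange(1, genome_length)
            child = a[:cut] + b[cut:]           # one-point crossover
            i = random.randrange(genome_length)
            child[i] ^= 1                       # flip one bit: mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Toy objective: maximise the number of 1-bits ("OneMax").
best = evolve(fitness=sum)
print(sum(best))  # typically reaches 16 within 100 generations
```

No line of this program says how to build a good genome; the solution emerges from variation and selection alone, which is exactly why the results of larger runs can be so hard for a programmer to follow.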
Two trends, the falling cost of transistors and the increasing number of components on a chip, combine to drive Moore’s Law. As these trends progress, it becomes possible to integrate computers and sensors into more and more objects. This enables us to capture new kinds of data, more accurately, and, thanks to wireless networks and telecommunications, the newly sensed data is available anywhere in real time, ready to be combined in almost limitless ways to produce new products and services.
Information has become codified and information technology modularized. Because of this we can install upgrades, add-ons and plug-ins much more quickly. Indeed, software upgrades can happen automatically, without you needing to be aware it is happening. This all allows innovation to spread faster than ever before. But the complexity of our networks is such that we cannot anticipate when some newly made connection will cause an instability, and the speed of innovation, combined with the “code is code” principle of IT, allows a competitor to arise from anywhere and quickly grow into a threat, even to seemingly unrelated organizations. Volatility and the cost of managing it both call out for a bottom-up, adaptive approach. Kevin Kelly observed:
“We find you don’t need experts to control things. Dumb people, who are not as smart in politics, can govern things better if all the lower rungs, the bottom, is connected together. And so the real change that’s happening is that we’re bringing technology to connect lots of dumb things together”.
It is not surprising, then, that social insects have been studied for insights into how a highly organized whole might emerge from the numerous activities of its parts, each with its own agenda. Methods for rerouting traffic in busy communications networks (based on the way ants forage for food) and better ways of organizing assembly lines in factories (based on the division of labour among bees) are just two examples of systems reliant on emergence.
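The ant-foraging idea can be illustrated with a toy simulation of the classic ‘double bridge’ experiment. No individual ant knows which of two paths to a food source is shorter, but ants on the short path complete trips sooner and so lay pheromone faster, and that positive feedback steers the whole colony onto the better route. The setup and parameters below are illustrative assumptions for the sketch, not a production routing algorithm:

```python
import random

def double_bridge(short_len=1.0, long_len=2.0, ants=100, rounds=200,
                  evaporation=0.1):
    """Simulate a colony choosing between two paths of different length.
    Returns the share of total trail strength on the short path."""
    pheromone = {"short": 1.0, "long": 1.0}
    length = {"short": short_len, "long": long_len}
    for _ in range(rounds):
        deposits = {"short": 0.0, "long": 0.0}
        for _ in range(ants):
            # Each ant picks a path with probability proportional to pheromone.
            total = pheromone["short"] + pheromone["long"]
            path = "short" if random.random() < pheromone["short"] / total else "long"
            # A shorter round trip means more pheromone laid per unit time.
            deposits[path] += 1.0 / length[path]
        for p in pheromone:  # old trails evaporate, new deposits accumulate
            pheromone[p] = (1 - evaporation) * pheromone[p] + deposits[p]
    return pheromone["short"] / (pheromone["short"] + pheromone["long"])

share = double_bridge()
print(f"{share:.2f}")  # approaches 1.0 as the colony converges on the short path
```

The organized outcome (the colony taking the efficient route) appears nowhere in the rules each ant follows; it emerges from local choices plus feedback, which is the pattern the network-routing and assembly-line systems above exploit.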
The increasing capability and falling cost of networked sensors and microprocessors are approaching a point where we can give any product the ability to sense environmental changes and respond appropriately to them. As we strive to create technologies and organizations that adapt continually and rapidly in order to keep pace with shifts in their market, and as technology is organized into networks of systems that sense, respond, and configure themselves in appropriate ways, will “it’s alive” become more than mere metaphor?
THE THIRD DIGITAL REVOLUTION
Claude Shannon digitized communication. Von Neumann did likewise for computers. So, will there be a third digital revolution? The answer, according to Neil Gershenfeld, is yes:
“It’s not when a program describes a thing, it’s when a program becomes a thing that we bring the programmability of the digital world to the physical world”.
In other words, we are looking to create the ability to take a description of an object, and then have it self-assemble from molecular or atomic building blocks. We are talking about machines with the capability to self-repair, and even self-replicate. What is more, our technologies are now beginning to sense the natural phenomena responsible for thought processes in living brains. Neuroscientist Lloyd Watts pointed out that:
“Scientific advances are enabled by a technology advance that allows us to see what we have not been able to see before… we collectively know enough about our own brains, and have developed such advanced computing technology, that we can now seriously undertake the construction of a verifiable, real-time, high-resolution model of significant parts of our intelligence”.
But, if it is indeed true that the brain IS the mind, and it becomes possible to build brains, do we then have machines that think creatively? Would this lead to technology conceiving of, designing, and manufacturing its own next generation? If so, how might the human/machine relationship be affected? In pondering questions like these, Carl Zimmer came up with the following:
“The Web is encircling the Earth… We have surrounded ourselves with a global brain, which taps into our own brains, an intellectual forest dependent on a hidden fungal network”.
FROM A DISTANCE…
Whether something appears to be a single entity or a collection of individuals often depends on your perspective. On a molecular scale, a single cell is a collection of chemicals and molecules. On the macro scale, a vast number of different cell types appears as a single animal. Maybe, seen from the perspective of space, our planet looks less like a vast number of people and technologies, and more like a single entity slowly teasing apart the laws of nature. It is as if, in at least one tiny corner of the cosmos, the Universe is organizing itself into patterns of matter/energy that pursue the directive, Temet Nosce — ‘Know Thyself’.