EVOLUTION IN SILICON
This is just as true today as it ever was, but something is new. Our most complex technologies have reached a point where we must look to the principles of nature to make or govern them.
Since the 1980s, an automated design process with obvious parallels to the natural world has emerged: ‘evolutionary algorithms’ (EAs). Whereas a human designer does not have time to test all possible combinations, an EA comes closer to doing so, thanks to the sheer speed with which computers can explore mathematical space. In the past, their use was limited to a few niche applications, because breeding and evaluating thousands of generations required ultra-fast computers. But, as EA pioneer John Koza explained:
“We can now undertake evolutionary problems that were once too complicated or time-consuming. Things we couldn’t do in the past, because it would have taken two months to run the genetic program, are now possible in days or less”.
Because the code that results from this evolutionary process is so different from conventional code, programmers find it impossible to follow. The complexity is beyond human capability to design or debug. This is something unprecedented in the scientific era of invention. While the layperson may use technology without understanding it, we at least expect the professional to have complete knowledge of how their designs function. But now that our most complex technologies are grown and trained rather than designed and built, they have to be studied as we now study nature: by probing and experimenting.
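The basic loop of an evolutionary algorithm can be sketched in a few lines. The toy below evolves a bit string toward all ones, a standard textbook exercise sometimes called “OneMax”; the population size, mutation rate, and other constants are invented for illustration, and this is far simpler than the genetic programming systems Koza describes.

```python
import random

GENOME_LEN = 32
POP_SIZE = 50
GENERATIONS = 200
MUTATION_RATE = 0.02

def fitness(genome):
    """Count of 1-bits: the quantity evolution tries to maximize."""
    return sum(genome)

def mutate(genome):
    """Flip each bit with a small probability."""
    return [1 - b if random.random() < MUTATION_RATE else b for b in genome]

def crossover(a, b):
    """Single-point crossover: splice the front of one parent onto the back of another."""
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def evolve():
    # Start from a random population and repeat the breed-and-cull cycle.
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # The fitter half become parents of the next generation.
        population.sort(key=fitness, reverse=True)
        parents = population[:POP_SIZE // 2]
        population = [mutate(crossover(random.choice(parents),
                                       random.choice(parents)))
                      for _ in range(POP_SIZE)]
    return max(population, key=fitness)

best = evolve()
print(fitness(best))  # typically at or near GENOME_LEN
```

Nothing in the loop “knows” what a good solution looks like; selection pressure alone steers the population, which is why the results of larger runs can be so opaque to the programmer.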
Two trends, the falling cost of transistors and the increasing number of components on a chip, combine to drive Moore’s Law. As the trend progresses, it becomes possible to integrate computers and sensors into more and more objects. This enables us to capture new kinds of data, more accurately, and, thanks to wireless networks and telecommunications, the newly sensed data is available anywhere in real time, ready to be combined in almost limitless ways to produce new products and services.
Information has become codified and information technology modularized. Because of this we can install upgrades, add-ons, and plug-ins much more quickly. Indeed, software upgrades can happen automatically, without your even being aware of them. All of this allows innovation to spread faster than ever before. But the complexity of our networks is such that we cannot anticipate when some newly made connection will cause an instability, and the speed of innovation, combined with the “code is code” principle of IT, allows a competitor to arise from anywhere and quickly grow into a threat, even to seemingly unrelated organizations. Volatility, and the cost of managing it, both call out for a bottom-up, adaptive approach. Kevin Kelly observed:
“We find you don’t need experts to control things. Dumb people, who are not as smart in politics, can govern things better if all the lower rungs, the bottom, is connected together. And so the real change that’s happening is that we’re bringing technology to connect lots of dumb things together”.
It is not surprising, then, that social insects have been studied for insights into how a highly organized whole might emerge from the numerous activities of its parts, each with its own agenda. Methods for rerouting traffic in busy communications networks (based on the way ants forage for food) and better ways of organizing assembly lines in factories (based on the division of labour among bees) are just two examples of systems reliant on emergence.
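The ant-foraging idea can be shown with a deterministic toy. Two routes lead from A to D, and each batch of “ants” splits across them in proportion to the pheromone on each; every path used is reinforced inversely to its cost, and all trails evaporate. The graph and constants here are invented for illustration, and real ant-inspired routing systems are far more elaborate, but the emergent effect is the same: traffic concentrates on the shorter route without any ant knowing the whole topology.

```python
# Link costs: route A-B-D totals 2, route A-C-D totals 5.
EDGES = {("A", "B"): 1, ("B", "D"): 1, ("A", "C"): 2, ("C", "D"): 3}
EVAPORATION = 0.95  # fraction of pheromone surviving each round

def run_round(pheromone):
    """One batch of ants: traffic splits in proportion to pheromone,
    each route is reinforced inversely to its cost, all trails decay."""
    total = sum(pheromone.values())
    updated = {}
    for hop in ("B", "C"):
        share = pheromone[hop] / total            # fraction of ants taking this hop
        cost = EDGES[("A", hop)] + EDGES[(hop, "D")]
        updated[hop] = pheromone[hop] * EVAPORATION + share / cost
    return updated

pheromone = {"B": 1.0, "C": 1.0}  # start with no preference
for _ in range(200):
    pheromone = run_round(pheromone)

# The shorter route (via B) ends up with far more pheromone than the
# longer one (via C), so most subsequent traffic follows it.
print(pheromone)
```

The positive feedback between trail strength and traffic is the whole mechanism; no component ranks the routes, yet a ranking emerges.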
The increasing capability and falling cost of networked sensors and microprocessors are approaching a point where we can give any product the ability to sense environmental changes and respond appropriately to them. As we strive to create technologies and organizations that adapt continually and rapidly in order to keep pace with shifts in their market; as technology is organized into networks of systems that sense, respond, and configure themselves in appropriate ways, will “it’s alive” become more than mere metaphor?