Earlier, we talked about the partially successful predictions of science fiction. Futurists are not always successful, though, and there are two kinds of failure that really stand out in hindsight. The computer industry has, in its time, fallen foul of both.

One form of failure is to drastically underestimate how quickly a technology will advance and how useful it will become in everyday life. In the 1940s, IBM chairman Thomas Watson took Grosch’s Law (named after fellow IBM employee Herbert Grosch, it states that computer power rises by the square of the price; that is, the more costly a computer, the better its price-performance ratio) to mean the total global market was ‘maybe five computers’. This, bear in mind, was back in the days when computers were room-filling behemoths built from vacuum tubes. The integrated circuit that forms the heart of all modern computers did not become commercially available until 1968. In 1965, Gordon Moore, co-founder of Fairchild Semiconductor (and later of Intel), took the annual doubling of the number of transistors that could be fitted onto an integrated circuit and predicted that ‘by 1975, economics may dictate squeezing as many as 65,000 components onto a single silicon chip’.

The integrated circuit led to desktop personal computers. These inexpensive commodities were thousands of times more cost-effective than mainframes, and they dealt Grosch’s Law a decisive defeat. Today, ‘Moore’s Law’ and its prediction that a fixed price buys double the computing power in 18 months’ time has become something of an industry given and has defied every forecast of its demise. The naysayers first heralded the end of Moore’s Law in the mid-1970s, when integrated circuits held around 10,000 components and their finest details were around 3 micrometers in size. In order to advance much further, a great many problems needed to be overcome, and experienced engineers were worrying in print that they might be insurmountable. It was also in the 1970s that Digital Equipment Corporation’s president, Ken Olsen, claimed, ‘there’s no reason for individuals to have a computer in their home’.
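Moore’s 1975 prediction was simple compounding. A quick sketch makes the arithmetic explicit (the 1965 starting figure of roughly 64 components is an assumption for illustration, consistent with the quoted prediction):

```python
# Moore's 1965 extrapolation: roughly 64 components per chip in 1965,
# doubling every year, gives tens of thousands of components by 1975.
components_1965 = 64  # assumed starting figure, purely illustrative

count = components_1965
for year in range(1965, 1975):
    count *= 2  # annual doubling, as Moore observed

print(count)  # 64 * 2**10 = 65536, close to Moore's '65,000 components'
```

Ten annual doublings turn a few dozen components into Moore’s ‘65,000’, which is why such forecasts so easily outrun intuition.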

Obviously, such pessimism was unfounded. By 2004, the feature size of integrated circuit gates had shrunk to around 50 nanometers, and we talk about billions of components rather than tens of thousands. Millions of PCs had been sold worldwide by 2002, even the Amish maintain a website, and we somehow went from a time in the 1960s, when nobody bar a few thousand scientists would have noticed if all the world’s computers stopped working, to a society that would grind to a halt if they did.

The other form of failure stands in direct contrast to the gaffe of drastically underestimating the growth of a technology: a technology that fails almost completely to live up to its promise. A particularly infamous example is robotics. The artificial intelligence movement was founded in the 1950s, and it was believed that within a decade or two, versatile, mobile, autonomous robots would have eliminated drudgery from our lives. By 1979, the state of the art in mobile robotics fell far short of the requisite capabilities. A robot built by Stanford University (known as ‘Cart’) took five hours to navigate its way through a 30-metre obstacle course, getting lost about one crossing in four. Robot control systems took hours to find and pick up a few blocks on a tabletop. Far from being competent enough to replace adults in manufacturing and service industries, robots were being far outperformed by toddlers in navigation, perception and object manipulation. Even by 2002, military-funded research on autonomous robot vehicles had produced only a few slow and clumsy prototypes.

Can we identify the reasons why experts came up with such wildly inaccurate predictions? In all likelihood, they were led astray by what computers do naturally. The first generation of AI research was inspired by computers that calculated like thousands of mathematicians, surpassing humans in arithmetic and rote memorization. Such machines were hailed as ‘giant brains’, a term that threatened to jeopardize computer sales in the 1950s as public fears of these ‘giant brains’ taking over took hold. It was this distrust that led IBM’s marketing department to promote the slogan ‘computers do only what their programs specify’, and the implication that humans remain ultimately in control is still held to be a truism by many today (despite being ever less true, given the increased levels of abstraction that modern programs force us to work at, requiring us to entrust ever-larger details to automated systems). Because computers were outperforming adults in such high mental abilities as mathematics, it seemed reasonable to assume that they would quickly master the abilities that any healthy child possesses.

We seem to navigate our environment, identify objects and grab hold of things without much mental effort, but this ease is an illusion. Over hundreds of millions of years, Darwinian evolution fine-tuned animal brains to become highly organized for perception and action. Through the 1970s and 1980s, the computers readily available to robotics research were capable of executing about 1 MIPS (one million instructions per second). On 1 MIPS computers, a single image crams memory and requires seconds to scan, and serious image analysis takes hours. Animal vision performs far more elaborate functions many times a second. In short, just because animals make perception and action seem easy, that does not mean the underlying information processing is simple.
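Some back-of-envelope arithmetic shows why 1 MIPS is so painful for vision; the frame size and instructions-per-pixel figures below are illustrative assumptions, not measurements:

```python
# Why 1 MIPS struggles with vision: rough, illustrative figures only.
mips = 1_000_000               # instructions per second at 1 MIPS
width, height = 512, 512       # one greyscale frame (assumed size)
pixels = width * height        # 262,144 pixels; at a byte each, this
                               # alone crams a 1970s machine's memory

ops_to_scan = 10               # assume ~10 instructions just to visit a pixel
scan_seconds = pixels * ops_to_scan / mips
print(f"one pass over the frame: {scan_seconds:.1f} seconds")   # ~2.6 s

ops_to_analyse = 100_000       # assume ~100,000 instructions per pixel,
                               # across many passes of serious analysis
analyse_hours = pixels * ops_to_analyse / mips / 3600
print(f"serious analysis: {analyse_hours:.1f} hours")           # ~7.3 h
```

Even under these generous assumptions, a single frame costs seconds to touch and hours to analyse, while an animal’s visual system does far more, many times per second.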

One can imagine a mad computer designer rewiring the neurons in a fly’s vision and motor system so that they perform as arithmetic circuits. Suitably optimised, the fly’s brain would match or even surpass the mathematical prowess of computers, and the illusion of computing power would be exposed. The field of cybernetics actually attempted something similar. But rather than rewire an animal brain so that it functioned like a computer, they did the opposite, using computers to copy the nervous system by imitating its physical structure. By the 1980s, computers could simulate assemblies of neurons, but only a few thousand at most. This was insufficient to match even an insect brain (a housefly has around 100,000 neurons). We now think it would take at least 100 MIPS to match the mental power of a housefly, yet the computers readily available to robotics research did not surpass 10 MIPS until the 1990s.
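The cost of that kind of simulation is easy to see in a minimal sketch. Even a toy leaky integrate-and-fire model (all constants below are illustrative assumptions, not biological measurements) does work proportional to neurons times synapses on every time step, which is why a few thousand neurons saturated 1980s hardware:

```python
import random

# Toy leaky integrate-and-fire network: illustrative constants only.
N = 2000                       # 'a few thousand' neurons, 1980s scale
THRESHOLD, LEAK = 1.0, 0.9     # spike threshold and per-step decay

random.seed(0)
# each neuron receives input from 10 randomly chosen neurons
synapses = [[random.randrange(N) for _ in range(10)] for _ in range(N)]
potential = [0.0] * N
fired = [False] * N

for step in range(100):        # 100 time steps
    # cost per step ~ N neurons x 10 synapses, repeated every step
    inputs = [sum(0.2 for pre in synapses[post] if fired[pre])
              for post in range(N)]
    for i in range(N):
        potential[i] = potential[i] * LEAK + inputs[i] + random.uniform(0, 0.05)
        fired[i] = potential[i] > THRESHOLD
        if fired[i]:
            potential[i] = 0.0  # reset after a spike

print(sum(fired), "neurons firing on the final step")
```

Scaling this toy from 2,000 neurons with 10 synapses each to a housefly’s 100,000 neurons (with far richer connectivity and far finer time steps) multiplies the work by orders of magnitude, in line with the 100 MIPS estimate above.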

Because they had the mental ability of insects, robots from 1950 to 1990 performed like insects, at least in some ways. Just as ants follow scent trails, industrial robots followed pre-arranged routes. With their insect-like mental powers, they were able to track a few handpicked objects but, as Hans Moravec commented, ‘such robots are easily confused by minor surprises such as shifted bar codes or blocked corridors (not unlike ants thrown off a scent trail or a moth that has mistaken a street light for the moon)’.

Insects adopted the evolutionary strategy of routinely engaging in fairly stupid behaviour while existing in such numbers that at least some are fortunate enough to survive long enough to procreate. Obviously, such a strategy is hardly viable for robots. No company could afford to routinely replace robots that fall down stairs or wedge themselves in corners. Nor is it practical to run a manufacturing system if changing a route requires expensive and time-consuming work by specialists who may not be available. The mobile robotics industry has learned what its machines need to do if they are to become commercially viable: it must be possible to unpack them anywhere and train them simply by leading them once through their tasks. Thus trained, a robot must perform flawlessly for at least six months. We now know that it would require at least 1,000 MIPS computers (mental matches for the tiniest lizards) to drive reliable mobile robots.

It would be a mistake to think that matching our abilities requires nothing more than sufficient computing power. Although computers were hailed as ‘giant brains’, neuroscience has since determined that, in many ways, brains are not like computers. For instance, whereas the switching units in conventional computers have around three connections, neurons have thousands. Also, computer processors execute a series of instructions in consecutive order, an architecture known as serial processing. The brain, by contrast, breaks a problem into many pieces, each of which is tackled separately by its own processor, after which the results are integrated into a general result. This is known as parallel processing.
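The contrast can be sketched with a toy problem, summing a list, where the parallel version splits the data into chunks, solves each independently, and then integrates the partial results (a sketch of the architecture only; real brains are nothing like this tidy):

```python
from multiprocessing import Pool

def partial_sum(chunk):
    # each 'processor' tackles its own piece of the problem
    total = 0
    for x in chunk:
        total += x
    return total

def serial(data):
    # serial processing: one processor walks the data in order
    return partial_sum(data)

def parallel(data, workers=4):
    # parallel processing: split the problem, solve the pieces
    # independently, then integrate the partial results
    size = len(data) // workers
    chunks = [data[i * size:(i + 1) * size] for i in range(workers - 1)]
    chunks.append(data[(workers - 1) * size:])  # last chunk takes the rest
    with Pool(workers) as pool:
        partials = pool.map(partial_sum, chunks)
    return sum(partials)  # the integration step

if __name__ == "__main__":
    data = list(range(1000))
    assert serial(data) == parallel(data) == 499500
```

Both routes give the same answer; the difference is that the parallel version could, in principle, finish in a quarter of the time on four processors, at the price of the extra split-and-recombine machinery.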

The differences between brains and computers are by no means restricted to these examples. But it needs to be understood that these differences need not be fundamental. Computers have gone through radical redesigns in the past (think of the silicon chip replacing vacuum tubes), and such a change can happen again. As Joe Tsien explained, ‘we and other computer engineers are beginning to apply what we have learned about the organization of the brain’s memory system to the design of an entirely new generation of intelligent computers’.

From that quote, one might conclude that Professor Tsien’s expertise lies predominantly in computer science. In fact, he is a professor of pharmacology and biomedical engineering, director of the Centre for Systems Neurobiology at Boston University and founder of the Shanghai Institute of Brain Functional Genomics. Why Professor Tsien should be interested in computer engineering becomes clear when you consider how neuroscience, computers and AI are beginning to intersect. The remote scanning systems that cognitive neuroscientists use to examine brain function require high-powered computers: the better the computer, the more detailed the brain scans. Knowledge gained from such scans leads to a better idea of how brains function, which can in turn be applied to make more powerful computers. As Gregory S. Paul and Earl Cox explained, ‘the investigative power of the combination of remote scanning and computer modelling cannot be exaggerated. It helps force neuroscientists to propose rigorously testable hypotheses that can be checked out in simplified form on a neural network such as a computer…We learned half of what we know about brains in the last decade as our ability to image brains in real time has improved keeping in step with the sophistication of brain scanning computers’.

It would be wrong to imply that the fMRI scanner is all that is required to reverse-engineer a brain. These machines help us pinpoint which areas of the brain are associated with which mental abilities, but they cannot show us how the brain performs those tasks. This is partly because current brain-scanning devices have a spatial resolution of one millimetre, whereas the axonal and dendritic processes comprising the brain’s basic neuronal circuits are so fine that only electron microscopy of 50-nanometre serial sections can resolve their connectivity. It is often said that the human brain is the most complex object in the known Universe. That complexity becomes apparent when you realise that mapping the neuronal network of the nematode worm took ten years, despite the fact that its whole nervous system occupies only about 0.01 mm^3. As you can imagine, mapping the 500 trillion synaptic connections between the 100 billion neurons in the human brain is a far greater challenge. However, the task becomes less daunting as we invent and improve tools to aid in the job of reverse-engineering the brain. In the past few years, we have seen the development of such things as:

A technique developed at Harvard University for synthesizing large arrays of silicon nanowires. With these, it is possible to detect electrical signals from as many as 50 places on a single neuron, whereas previously we could pick up only one or two signals per neuron. The ability to detect electrical activity at many places along a neuron improves our knowledge of how a neuron processes and acts on incoming signals from other cells.

A ‘Patch Clamp Robot’ has been developed by IBM to automate the job of collecting the data used to construct precise maps of ion channels and to figure out other details necessary for the accurate simulation of brain cells. This robot is able to do about 30 years’ worth of manual lab work in about six months.

An ‘Automatic Tape-Collecting Lathe Ultramicrotome’ (ATLUM) has been developed to ‘aid in the efficient nanoscale imaging over large (tens of cubic millimetres) volumes of brain tissue. Scanning electron microscope images of these sections can attain sufficient resolution to identify and trace all circuit activity’. ATLUM is currently only able to map entire insect brains or single cortical columns in mammalian brains (a cortical column is the basic computational unit of the brain), but anticipated advances in such tools will exponentially increase the volume of brain tissue we can map.

Together with such things as automated random-access nanoscale imaging, intelligent neuronal-tracing algorithms and in-vivo cellular-resolution imaging of neuronal activity, we now have a suite of tools for overlaying the activity patterns within a brain region on a detailed map of the synaptic circuitry within that region. Although we still lack a way to bring together the pieces of what we know into an overarching theory of how the brain works, advances in the understanding of brain function have led to such things as:

Professor Tsien and his colleagues’ discovery of what may be the basic mechanism the brain uses to convert collections of electrical impulses into perception, memory, knowledge and behaviour. Moreover, they are developing methods to convert this so-called ‘universal neural code’ into a language that can be read by computers. According to Professor Tsien, this research may lead to ‘seamless brain-machine interfaces, a whole new generation of smart robots’ and the ability to ‘download memories and thoughts directly into computers’.

Work conducted by MIT’s Department of Brain and Cognitive Sciences has led to a greater understanding of how the brain breaks down a problem in such a way that the finished pieces can be seamlessly recombined (successfully performing this step has been one of the stumbling blocks in using parallel processing in computers). This work has led to a general-vision program that can perform immediate recognition, the simplest case of general object recognition. Immediate recognition is typically tested with something called the ‘animal absence/presence test’, in which a test subject is shown a series of pictures in very rapid succession (a few tenths of a second for each photo) and tries to determine whether an animal is present in each one. When the program took this test alongside human subjects, it gave the right answer 82% of the time, whereas the people were correct 80% of the time. This was the first time a general-vision program had performed on a par with humans.

IBM’s Blue Brain Project has built a supercomputer comprising 2,000 microchips, each of which has been designed to work just like a real neuron in a real brain. The computer is currently able to simulate a neocortical column. It achieves this by simulating the particular details of our ion channels, and so, just like a real brain, the behaviour of Blue Brain naturally emerges from its molecular parts. According to Henry Markram (director of Blue Brain), ‘this is the first model of the brain that has been built from the bottom up…totally biologically accurate’. His team expects to be able to accurately model a complete rat brain by around 2010, and plans to test the model by downloading it into a robot rat whose behaviour will be studied alongside real rats.

In late 2007, Hans Moravec’s company Seegrid ‘had load-pulling and factory “tugger robots” that, on command, autonomously follow routes learned in a single human-guided walkthrough’.

Many of the hardware limitations (and some of the software issues) that hampered mobile robots in the past have been overcome. Since the 1990s, the computer power available for controlling a research robot has shot past 100 MIPS and reached 50,000 MIPS in some high-end desktop computers. Laser range finders that precisely measure distance, and which cost roughly $10,000 a few years ago, can now be bought for about $2,000. At the same time, the basic building blocks of perception and behaviour that serve animals so well have been reverse-engineered.
