Past experience tells us that we should expect mobile robots with all the capabilities of people only after generations of machines that match the capabilities of less complex animals. Hans Moravec outlined four generations of ‘universal robots’, beginning with those whose mental power matches lizards. The comparison with animals is only meant to be a rough analogy. Nobody is suggesting that robots with the mental capabilities of monkeys are going to swing from your light fittings going ‘oo,oo,oo’…

1st-generation robots will have onboard computers with a processing power of about 3,000 MIPS. These machines will be direct descendants of robots like Roomba (an autonomous vacuum cleaner) or even human-operated vehicles like forklift trucks (which can be adapted for autonomy). Whereas Roomba moves randomly and can sense only immediate obstacles, 1st-gens will have sufficient processing power to build photorealistic 3D maps of their surroundings. They will seem to have genuine awareness of their circumstances, able to see, map and explore their workplaces and perform tasks reliably for months on end. But they will only have enough processing power to handle contingencies explicitly covered in their application programs. Except for specialized episodes like recording a new cleaning route (which, as mentioned earlier, should ideally require nothing more complicated than a single human-guided walkthrough), they will be incapable of learning new skills or of adapting to new circumstances. Any impression of intelligence will quickly evaporate as their responses are never seen to vary.

2nd-generation robots will have 100,000 MIPS at their disposal, giving them the mental power of mice. This extra power will be used to endow them with ‘adaptive learning’. In other words, their programs will provide alternative ways to accomplish steps in a task. For any particular job, some alternatives will be preferable to others. For instance, a way of gripping one kind of object may not work for other kinds of object. 2nd-gens will therefore also require ‘conditioning modules’ that reinforce positive behaviour (such as finding ways to clean a house more efficiently) and weed out negative outcomes (such as breaking things).
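The conditioning module described above can be pictured as a simple reinforcement loop: the robot tries its alternative methods, raises the score of whatever succeeds and lowers the score of whatever fails. The sketch below is a hypothetical illustration in Python, not a real robot control system; the grip names and success rates are invented for the example.

```python
import random

class ConditioningModule:
    """Keeps a reinforcement score for each alternative way of doing a step."""

    def __init__(self, alternatives):
        # Every alternative starts with a neutral score.
        self.scores = {name: 0.0 for name in alternatives}

    def choose(self):
        # Mostly exploit the best-scoring alternative, occasionally explore.
        if random.random() < 0.1:
            return random.choice(list(self.scores))
        return max(self.scores, key=self.scores.get)

    def reinforce(self, name, succeeded):
        # Positive outcomes (object gripped) raise the score;
        # negative outcomes (object dropped or broken) lower it.
        self.scores[name] += 1.0 if succeeded else -1.0

# Hypothetical example: three alternative ways of gripping an object,
# each with an unknown true success rate the robot must discover.
grips = ConditioningModule(["pinch", "wrap", "scoop"])
true_success = {"pinch": 0.2, "wrap": 0.9, "scoop": 0.5}

random.seed(0)
for _ in range(500):
    g = grips.choose()
    grips.reinforce(g, random.random() < true_success[g])

best = max(grips.scores, key=grips.scores.get)
```

After a few hundred trials the reliable grip dominates the scores, which is the sense in which the module ‘weeds out’ negative outcomes without anyone programming the preference explicitly.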

Such robots could behave in dangerous ways if they were expected to learn about the physical world entirely through trial and error. It would obviously be unacceptable to have your robotic housekeeper throw a bucket of water over your electrical appliances as it learns effective and ineffective ways to spruce them up. Moravec suggests using supercomputers to provide simulated environments for such robots to learn in. It would not be possible to simulate the everyday world in full physical detail, but approximations could be built up by generalizing data collected from actual robots. According to Moravec, ‘a proper simulator would contain at least thousands of learned models for various basic actions, in what amounts to a robotic version of common-sense physics…Repeatedly, conditioning suites that produced particularly safe and effective work would be saved, modified slightly and tried again. Those that do poorly would be discarded’.

2nd-gens will therefore come pre-installed with the knowledge that water and electrical appliances do not mix, that glass is a fragile material and so on, thereby ensuring that they learn about the world around them without endangering property or lives. They will adjust to their workplaces in thousands of subtle ways, thereby improving performance over time. To a limited extent, they will appear to have likes and dislikes and be motivated to seek the former and avoid the latter. But outside the specific skills built into their application program of the moment, they will seem no smarter than a small mammal.

3rd-generation robots will have onboard computers as powerful as the supercomputers that optimised 2nd-gen robots: roughly a monkey-scale 3,000,000 MIPS. This will enable the 3D maps of a robot's environment to be transformed into perception models, giving 3rd-gens the ability not only to observe their world but also to build a working simulation of it. Whereas 2nd-gens make all their mistakes in real life, a 3rd-gen could run its simulation slightly faster than real time, mentally train for a new task, alter its intent if the simulation results in a negative outcome, and so probably succeed physically on the first attempt. With their monkey-scale intelligence, 3rd-gens will probably be able to observe a task being performed by another person or robot and learn to imitate it, formulating a program for doing the task themselves. However, a 3rd-gen will not have sufficient information or processing power to simulate itself in detail. Because of this, 3rd-gens will seem simple-minded in comparison to people, concerned only with concrete situations and the people in their work areas.
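The mental-rehearsal idea above can be sketched in miniature: before acting, the robot runs each candidate plan many times through its internal world model and commits only to the plan the simulation predicts will end safely. Everything in this Python sketch is hypothetical, including the action names and the per-action failure risks standing in for a real learned physics model.

```python
import random

def rehearse(plan, world_model, trials=100):
    """Mentally run a plan many times in the robot's internal world model
    and return the estimated probability that it completes safely."""
    successes = 0
    for _ in range(trials):
        # A real perception model would step through simulated physics;
        # here the model just exposes an estimated failure risk per action.
        if all(random.random() > world_model[action] for action in plan):
            successes += 1
    return successes / trials

# Hypothetical learned model: estimated chance that each basic action fails.
world_model = {
    "balance_tray_one_hand": 0.6,
    "open_door": 0.05,
    "set_tray_down": 0.02,
    "pick_tray_up": 0.02,
}

plans = [
    ["balance_tray_one_hand", "open_door"],          # risky shortcut
    ["set_tray_down", "open_door", "pick_tray_up"],  # slower but safer
]

random.seed(42)
estimates = {tuple(p): rehearse(p, world_model) for p in plans}

# Alter intent: act only on the plan the rehearsal predicts will succeed.
chosen = max(plans, key=lambda p: estimates[tuple(p)])
```

Because the rehearsal is cheap and can run faster than real time, the risky shortcut is rejected in simulation rather than with a real tray of crockery, which is exactly the advantage a 3rd-gen would have over a 2nd-gen.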

4th-generation robots will have a processing power of 100 million MIPS, which Moravec estimates to be sufficient for human-scale intelligence. They will not only be able to run simulations of the world, but also to reason about the simulation. They will be able to understand natural language as well as humans, and will be blessed with many of our perceptual and motor abilities. Moravec says that 4th-gens ‘will be able to accept statements of purpose from humans (such as ‘make more robots’) and “compile” them into detailed programs that accomplish the task’.


A short answer to the question, ‘what defines a 4th-gen robot?’ might be ‘they are machines with the general competence of humans’. However, it may not be the case that 4th-gens will have all of the capabilities of people. Today, technical limitations are the reason why mobile robots cannot match humans in terms of motor control, perceptual awareness, judgement and emotion: we simply don't yet know how to build robots that can do those things. In the future, we may know how to build such robots but for various reasons may decide not to equip them with the full range of human capabilities. For instance, whereas a human has natural survival instincts and a distaste for slavery, robots may be designed so that they want to serve more than survive. This is certainly not unprecedented in nature: in the animal kingdom we find examples of individuals motivated to serve more than survive, with the worker castes of social insects being a good case in point.

Bill Gates wrote, ‘I can envision a future in which robotic devices will become a nearly ubiquitous part of our day-to-day lives. I believe that technologies such as distributed computing, voice and visual recognition, and wireless broadband connectivity will open the door to a new generation of autonomous devices that will enable computers to perform in the physical world on our behalf. We may be on the verge of a new era, when the PC will get off the desktop and allow us to see, hear, touch and manipulate objects where we are not physically present’.

‘A robot in every home’ sounds similar to Gates and Paul Allen's dream of ‘a computer in every home’. But the impact that mobile robots might have on our lives could be even more profound. Computers have changed the world in ways that few people anticipated, but in many ways they exist in a world separate from the one in which we live our lives. As we have seen, this is because machine intelligence has not had the ability to act autonomously in physical space, instead finding its strengths in mathematical space. But if the problems of motor control, perceptual awareness and reasoning are overcome, it might be possible for robots to run society without us, not only performing all productive work but also making all managerial and research and development decisions.

This leads to the question, ‘why would we surrender so much control to our machines?’ Perhaps we won't. But according to Joseph Tainter, an archaeologist and author of ‘The Collapse of Complex Societies’, ‘for the past 100,000 years, problem solving has produced increasing complexity in human societies’. Every solution ultimately generates new problems. Success at producing larger crop yields leads to a bigger population. This in turn increases the need for more irrigation canals to ensure crops won't fail due to patchy rain. But too many canals make ad-hoc repairs infeasible, and so a management bureaucracy needs to be set up, along with some kind of taxation to pay for it. As the population keeps growing, the resources that need to be managed and the information that needs to be processed grow and diversify, which in turn leads to more kinds of specialists. According to Tainter, sooner or later ‘a point is reached when all the energy and resources available to a society are required just to maintain its existing levels of complexity’.

Once such a point is reached, a paradigm shift in the organization of hierarchies becomes inevitable. Yaneer Bar-Yam, who heads the New England Complex Systems Institute in Cambridge, Massachusetts, explained that ‘to run a hierarchy, managers cannot be less complex than the systems they are managing’. Rising complexity requires societies to add more and more layers of management. In a hierarchy, there ultimately has to be an individual who can get their head around the whole thing, but eventually this becomes impossible. When that point is reached, hierarchies give way to networks in which decision-making is distributed. In ‘Molecular Nanotechnology and the World System’, Thomas McCarthy wrote, ‘as global markets expand and specialization increases, it is becoming the case that many products are available from only one country, and in some cases only one company in that country…Whole industries may be brought to their knees without access to a crucial part’.

This is the logical extreme of international trade and of the division of labour: dependence on other nations leading to networked civilizations that become increasingly tightly coupled. ‘The intricate networks that tightly connect us together’, said the political scientist Thomas Homer-Dixon, ‘amplify and transmit any shock’. In other words, the interconnectedness of the global system reaches a point where a breakdown anywhere means a breakdown everywhere.

But the benefits we get from the division of labour become truly profound only when the group of workers trading their goods and services becomes very large. As McCarthy pointed out, ‘it is no coincidence that the dramatic increase in world living standards that followed the end of the Second World War was concurrent with the dramatic increase in international trade made possible by the liberal post-war trading regime; improved standards of living are the result of more trade, because more trade has meant a greater division of labour and thus better, cheaper products and services’.

This is something that customers have come to expect: the right to choose better goods at lower prices. So long as this attitude exists, rising productivity will remain a business imperative. Therefore, output per worker must increase, and so the amount of essential labour decreases. Mechanization and automation have increased productivity, but apart from highly structured environments such as those found in car assembly plants, machines have required human direction and assistance. Mobile robots, however, are advancing on all fronts, and they represent a solution to the problem of managing complex networked civilizations, while at the same time shrinking the human component of competitive business.

In ‘The New Luddite Challenge’, Ted Kaczynski argued that ‘as society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decisions for them…a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage, the machines will be in effective control’. Admittedly, the poor performance of mobile robots in the past does invite skepticism about the idea of a totally automated society. But no matter how unlikely the idea of truly intelligent, autonomous robots may seem, the prospect of humans being engineered to match the advantages of machines is even more infeasible. A robot worker would have unwavering attention, would perform its task with maximum efficiency over and over again, and would never ask for holidays or even a wage-packet at the end of the day (though the need for maintenance will mean something like sick leave still exists). It is simply inconceivable that people could be coerced into working 24/7 for no pay, but with robots every nuance of their motivation is a design choice. Provided the problems of spatial awareness and object recognition and handling can be solved, and especially if artificial general intelligence is ever achieved, there seems to be no reason why capable robots wouldn't displace human labour so broadly that the average workday would have to drop to zero.
