THE HUMAN COMPUTING LAYER.
Organizing data, metadata and links between data nodes is obviously not the only way of leveraging the human intelligence embedded in the Web. We have long been used to harnessing the unused processing power of millions of individual PCs for distributed computing projects like SETI@home, and now ‘crowdsourcing’ uses the Internet to form distributed labour networks that exploit the spare processing power of millions of human brains. The cost barriers that once separated the professional from the amateur have been eroded thanks to technological advances in everything from product design software to digital video cameras, and companies in industries as disparate as pharmaceuticals and television are taking advantage of hobbyists, part-timers and dabblers, with no need to worry about where they are, so long as they are connected to the network.

This pool of cheap labour is fast becoming a real necessity, according to Larry Huston, Procter & Gamble’s vice president of innovation and knowledge: ‘Every year research budgets increase at a rate faster than sales. The current R&D model is broken (but now) we have up to 1.5 million researchers working through our external networks’. Those external networks come in the form of websites like Amazon Mechanical Turk, which helps companies find people with a few minutes to spare to perform tasks computers are lousy at (identifying items in a photograph, perhaps). YourEncore helps companies find and hire retired scientists for one-off assignments, and on InnoCentive problems are posted for anyone on the network to have a go at solving.

Invariably, any open call for submissions will elicit far more junk than genuinely useful answers. In fact, one rule of crowdsourcing states that ‘the crowd produces mostly crap’. But then there is the rule that ‘the crowd is full of specialists’, meaning people with the ‘right stuff’ to actually solve the problem. Just what counts as the right stuff varies from website to website. The tasks posted on Mechanical Turk could be taken on by anyone with basic literacy skills; sites like iConclude, on the other hand, require professional expertise (in this case, expertise in troubleshooting server software). In all cases, the dispersed workforce needs to be able to complete the job remotely, and the task cannot be too big, because what crowdsourcing mostly taps into are those spare moments people have. The overall job may well be immense, such as compiling an online encyclopedia with tens of millions of entries, but that doesn’t matter so long as the task can be divided up into micro-chunks that people can have a go at if they have the time and inclination.

Such tasks might involve correcting errors: Wikipedia enthusiasts quite enjoy ferreting out and fixing inaccuracies that appear in the encyclopedia. That’s one way to get around the problem of sorting the gems from the junk: let the crowd collectively hunt down the best material and correct or eliminate the garbage. Another way is to install cheap, effective filters to separate the wheat from the chaff. But mostly the cost-effectiveness lies in the fact that the correct solution can be bought for a fraction of what it would cost an in-house R&D team to come up with the same answer, and that team would expect payment whether or not it solved the problem. In contrast, the crowd of ‘solvers’ on InnoCentive are happy to provide their services knowing full well that if their solution is not selected they earn absolutely nothing.
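The pattern described above, splitting a big job into micro-chunks, gathering answers from whoever has a spare moment, and filtering the crap from the gems, can be sketched in a few lines. The snippet below is only an illustration of that pattern, not any real crowdsourcing platform’s API; the function names, the agreement threshold and the toy data are assumptions made for the example.

```python
# A minimal sketch of the micro-chunking and filtering pattern described above.
# Nothing here is a real crowdsourcing API: the helper names, the agreement
# threshold and the toy data are all assumptions made for illustration.
from collections import Counter

def split_into_microtasks(items, chunk_size=1):
    """Divide a big job into bite-sized units a contributor can finish in spare minutes."""
    return [items[i:i + chunk_size] for i in range(0, len(items), chunk_size)]

def filter_by_agreement(answers, min_agreement=0.6):
    """A cheap filter: keep the majority answer only if enough contributors agree on it."""
    if not answers:
        return None
    winner, votes = Counter(answers).most_common(1)[0]
    return winner if votes / len(answers) >= min_agreement else None

# Toy run: one big labelling job becomes many one-photo tasks ...
microtasks = split_into_microtasks(["photo_001", "photo_002", "photo_003"])

# ... and each task gathers a handful of answers from whoever had a spare moment.
crowd_answers = {
    "photo_001": ["bicycle", "bicycle", "unicycle"],  # high agreement: kept
    "photo_002": ["dog", "cat", "ferret"],            # mostly noise: discarded
}
results = {task: filter_by_agreement(votes) for task, votes in crowd_answers.items()}
print(results)  # {'photo_001': 'bicycle', 'photo_002': None}
```

Asking several contributors for the same micro-task and keeping only high-agreement answers is one example of the ‘cheap, effective filters’ mentioned above: redundancy does the quality control when any individual contribution may well be crap.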
One can well imagine crowdsourcing becoming more powerful as the technologies that help computers organize online data improve. After all, the more capable the Web is at supplying answers to questions, or at tracking down that piece of information you require right now, or at bringing together the right combination of minds to collaborate on a problem, the more complex the puzzle that can be solved in the same timeframe. More capable tools could also shorten the amount of time required. One such example is ‘NanoEngineer-1’, an open-source CAD package for the design and modelling of atomically precise components and assemblies. According to Damien Gregory Allis:
prior to NE1, any one of my images involved 3-5 hours… to make sure things with surfaces were within Van der Waals radii contacts, etc. I remember vividly the 1st NE1 board meeting where Mark Sims, in 30 seconds, had generated a Drexler/Merkle bearing from 2 repeater units.
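The chore Allis describes is, at bottom, distance bookkeeping against van der Waals radii. The sketch below illustrates one such check, flagging atom pairs that sit closer than their van der Waals contact distance; it is not NanoEngineer-1 code, the radii are textbook approximations, and the function name is invented for the example.

```python
# A rough illustration of the kind of manual check described above: flag any
# pair of atoms whose centres sit closer than the sum of their van der Waals
# radii. (A real check would exclude bonded neighbours; omitted for brevity.)
from itertools import combinations
from math import dist

VDW_RADII = {"H": 1.20, "C": 1.70, "N": 1.55, "O": 1.52}  # angstroms, approximate

def vdw_overlaps(atoms):
    """Return element pairs whose separation is below van der Waals contact distance."""
    overlaps = []
    for (elem1, pos1), (elem2, pos2) in combinations(atoms, 2):
        if dist(pos1, pos2) < VDW_RADII[elem1] + VDW_RADII[elem2]:
            overlaps.append((elem1, elem2))
    return overlaps

# Two carbons 1.5 angstroms apart overlap (their contact distance is 3.4 angstroms);
# the same pair 4.0 angstroms apart is fine.
print(vdw_overlaps([("C", (0.0, 0.0, 0.0)), ("C", (1.5, 0.0, 0.0))]))  # [('C', 'C')]
print(vdw_overlaps([("C", (0.0, 0.0, 0.0)), ("C", (4.0, 0.0, 0.0))]))  # []
```

Presumably it is this sort of bookkeeping, automated and performed at scale, that lets a package like NanoEngineer-1 collapse a multi-hour manual chore into seconds.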
The omninet, then, could be a tremendously powerful driver of education, training both human and machine intelligence to tackle increasingly complex problems. A common criticism of the prospect of artificial intelligence is to point out that humanity has been trying for a very long time to build robots and other machines that behave like people, and so far has had little success. Apparently, this is supposed to be adequate justification for believing the goal is fundamentally unreachable. One might counter, though, that the extremely limited computational resources AI researchers had to work with hampered their chances of success. A more important reason was the sheer lack of data concerning the thing they were trying to model: the human brain.