SHADES OF GREY: An essay by Extropia DaSilva

GATHERING INFORMATION FOR BUILDING AI.

Things will be rather different as the Web expands outwards and merges with our physical environment. We will then be in a position to obtain tremendous amounts of data from all scales of human life. Starting at the widest viewpoint, a global network of discrete sensors will obtain information about the patterns of behaviour we exhibit as a species. This is not something that has to wait for ‘the future’ before it can begin; in fact, our social behaviour is already being harnessed to provide insightful data. Tag-rich websites like Flickr and del.icio.us allow users to create, share and modify their own systems of organization, and their collective activity results in data whose structure is usefully modelled on the way we think.

It’s now generally accepted that the trend towards miniaturization will lead to further personalization of our computers, as they progress from desktop or laptop devices to wearable items that wirelessly connect to the Web to access software applications, rather than store and run them as PCs do today. Because a person will be continually connected to the Web, it will be possible to obtain copious amounts of data concerning individual patterns of behaviour. Sensors will be able to record the tiniest details, and smart software will use this information to tailor its services. For instance, we now know that the tiny movements the eyes constantly make (called ‘microsaccades’) are biased towards objects to which people are attracted, even when a person is making an effort to avert their gaze. Today, companies like Microvision are working on eyewear that uses lasers to draw images directly onto the retina for virtual/augmented reality. Perhaps that eyewear could also be equipped with sensors that monitor a person’s microsaccades and infer their object of interest. Another idea (one that is actively being pursued by Intel) is to use devices that detect a person’s pitch, volume, tone and rate of speech. These change in predictable ways depending on our emotional state and social context, so even without understanding the meaning of the spoken words, monitoring and processing such audio information can reveal a lot about a person’s mind, situation and social network.

Ultimately, the Omninet’s gaze may focus right down on the workings of the brain itself, as biocompatible nanoscale transponders enable neuroscience to make millions of recordings per second of the brain as it processes information, thereby obtaining a working model of the brain performing its tasks. The Omninet’s sensors will be woven into all human biological networks, from the smallest scale of the brain’s neural net to the largest networks of society itself. The Semantic Web will also create strong networks among the many scientific fields, furthering the collaboration that is essential for the task of coding general artificial intelligence.
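Returning to the collective tagging mentioned above, here is a minimal sketch (the sample data is invented purely for illustration) of how useful structure can be recovered from nothing more than which tags users apply together. Tags that frequently co-occur on the same items approximate conceptual relatedness, structure that emerges from collective behaviour rather than being imposed top-down:

```python
from collections import Counter
from itertools import combinations

# Hypothetical sample of user-generated tag sets, one per bookmarked item
# (the kind of data a site like del.icio.us accumulates).
taggings = [
    {"ai", "semantic-web", "research"},
    {"ai", "machine-learning", "research"},
    {"semantic-web", "rdf", "metadata"},
    {"ai", "semantic-web", "metadata"},
]

# Count how often each pair of tags is applied to the same item.
cooccurrence = Counter()
for tags in taggings:
    for pair in combinations(sorted(tags), 2):
        cooccurrence[pair] += 1

# The most frequent pairs hint at which concepts users treat as related.
for (a, b), count in cooccurrence.most_common(3):
    print(f"{a} <-> {b}: {count}")
```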

In all probability, ‘artificial’ intelligence will become progressively more capable of performing our pattern-recognition-based forms of intelligence, as our technologically-enhanced ability to contribute to the growing knowledge-base improves our understanding of the relevant principles. Still, for a while natural intelligence will retain abilities that AI cannot yet match. It makes sense to tap into crowdsourcing and route the parts of a problem that still require human intelligence to actual humans. Ubiquitous sensors and a semantic web will be just as useful for expanding our educational possibilities as they will be for building AI. We talked earlier about how crowdsourcing taps into the human network for workable solutions, but it’s worth remembering that even the many impracticable solutions contain useful information. For instance, they may reveal hidden prejudices and false assumptions that cloud our ability to ask the right questions. The Semantic Web would make it much easier to interlink any document, from the roughest draft to the most polished final (but not necessarily accurate) article, and permanent access to an ever-present internet will provide a medium for capturing our ideas whenever inspiration strikes. Each point and rebuttal an idea generates would also be semantically tagged, allowing anyone to see at a glance the direct agreements and contradictions, and the supporting evidence for each view. Supported by machine intelligence, we will collectively trace back to the assumptions that were made and the data that was used, applying techniques like reductio ad absurdum to learn from our mistakes.
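A toy sketch of that semantically-tagged debate, assuming a simple graph in which each claim records what supports it and what contradicts it (all claim names are invented). Tracing the ‘supports’ links back to statements that rest on nothing else surfaces the raw data and unexamined assumptions a reader should scrutinise:

```python
# Each claim lists the statements supporting it and those contradicting it.
claims = {
    "conclusion":   {"supports": ["evidence-1", "assumption-A"], "contradicts": ["rebuttal-1"]},
    "evidence-1":   {"supports": ["raw-data"], "contradicts": []},
    "rebuttal-1":   {"supports": ["assumption-B"], "contradicts": ["conclusion"]},
    "assumption-A": {"supports": [], "contradicts": []},
    "assumption-B": {"supports": [], "contradicts": []},
    "raw-data":     {"supports": [], "contradicts": []},
}

def trace_assumptions(claim, graph):
    """Follow 'supports' links back to statements that rest on nothing
    else: the assumptions and raw data underlying the claim."""
    stack, seen, leaves = [claim], set(), []
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        supports = graph[node]["supports"]
        if not supports:
            leaves.append(node)
        stack.extend(supports)
    return leaves

print(trace_assumptions("conclusion", claims))
# -> ['assumption-A', 'raw-data'] (order depends on traversal)
```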

It’s reasonable to assume that the questions we ask will yield multiple answers, partly because there is famously more than one way to crack a nut, and partly because each person’s unique life experience leads them to frame and solve problems in different ways. Harnessing the power of intercreativity (the process of making things or solving problems together) will stockpile solutions and the multiple paths towards them, leading to a much richer form of education than the ‘one-size-fits-all’ methods we are currently limited to.

Narrowing our focus down to the individual, one problem with today’s Web is that it knows little about you, and therefore has no model of how you learn or of what you do and don’t know. Rapid advances in storage, sensor and processor technologies will enable a person to automatically capture and record all the various forms of information they engage with, storing it in a personal digital archive. Everything about the user’s life will be logged and continually processed by machine intelligences, which learn about user behaviour and interaction so as to deliver relevant information whenever it’s needed. This may consist not just of answers, but of dynamically-generated explanatory paths that must be understood if the answer is to be illuminating. Our collective endeavour to create a global database organized according to concepts and ways to understand them will generate lots of information about how concepts relate, who believes them and why, what they’re useful for, and so on. A smart Web would find the most appropriate path between what you already know and what you need to learn. Your unique learning style will be understood by the Omninet, which will filter, select, and present information in the form of pictures, stories, examples and abstractions: the best and most meaningful explanation of what you need to know. The system will scrutinize explanations that don’t work, or that tend to raise particular questions, using various forms of feedback to adjust its explanatory paths.
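A minimal sketch of that path-finding idea, assuming a hypothetical prerequisite graph in which an edge means ‘understanding X helps explain Y’ (all concept names invented): treat concepts as nodes and run a breadth-first search from whatever the learner already knows to the target, yielding the shortest chain of explanations:

```python
from collections import deque

# Hypothetical prerequisite graph: an edge means "understanding X helps
# explain Y".
explains = {
    "arithmetic": ["algebra"],
    "algebra": ["calculus", "group-theory"],
    "calculus": ["differential-geometry"],
    "group-theory": ["differential-geometry"],
}

def explanatory_path(known, target, graph):
    """Breadth-first search from any concept the learner already knows
    to the target, returning the shortest chain of explanations."""
    queue = deque([[k] for k in known])
    seen = set(known)
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no explanatory path found

print(explanatory_path({"arithmetic"}, "differential-geometry", explains))
# -> ['arithmetic', 'algebra', 'calculus', 'differential-geometry']
```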

This won’t be a case of machines taking over the job of teaching, since the Web will harness the power of both networked computing platforms and people. The various explanations, and the paths connecting them, will be created by human activity on the Web, for at least as long as machine intelligence is confined to ‘narrow’ AI. The job of narrow AI will be to present them in the best form (charts, graphics, or natural-language text, say), converting abstract concepts into the domain-specific language most appropriate for any particular user. Eventually, software agents will handle this task too, but in the meantime humans will offer realtime assistance, providing the key knowledge, logic and pattern-recognition capabilities AI can’t yet handle.
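A toy sketch of that presentation step, with invented renderer names and profile fields: the same abstract concept, dispatched to whichever form a given user’s profile prefers:

```python
# Hypothetical renderers: each presents the same concept in a different form.
def render_as_chart(concept):
    return f"[chart visualising {concept}]"

def render_as_story(concept):
    return f"Once upon a time, {concept} ..."

def render_as_text(concept):
    return f"Formally, {concept} is defined as ..."

RENDERERS = {
    "visual": render_as_chart,
    "narrative": render_as_story,
    "verbal": render_as_text,
}

def present(concept, user_profile):
    # Fall back to plain text if the learning style is unrecognised.
    renderer = RENDERERS.get(user_profile["learning_style"], render_as_text)
    return renderer(concept)

print(present("hyperbolic space", {"learning_style": "visual"}))
```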

As well as training AI to perform human capabilities, we should also find ways to help people comprehend data that is more suited to machine intelligence. We have had some success at representing higher-dimensional mathematical models in a form that can be understood by our brains. Many of M.C. Escher’s works attempted to convey the mathematical landscape: we talked earlier about the hyperbolic space in which modular forms exist, and ‘Circle Limit IV’ embeds that hyperbolic world into the two-dimensional page. The program ‘Mathematica’ can produce 2-D geometric shapes associated with the different values of ‘n’ in Fermat’s equation. Each equation has its own shape, but one thing they have in common is that every single one is punctured with many holes, and the larger the value of ‘n’ in the equation, the more holes there are in the corresponding shape. Before Fermat’s Last Theorem was proved, the fact that there must always be more than one hole helped differential geometers make a major contribution towards understanding it.
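The link between ‘n’ and the number of holes can be stated precisely. Viewed as a curve over the complex numbers, Fermat’s equation defines a surface whose number of holes is its genus, given by a standard formula:

```latex
% Genus of the smooth projective Fermat curve  x^n + y^n = z^n :
%   g = 0 for n = 2,  g = 1 for n = 3,  and g > 1 for every n >= 4.
\[
  g \;=\; \frac{(n-1)(n-2)}{2}, \qquad g > 1 \ \text{for all } n \ge 4.
\]
```

That g exceeds 1 for every n ≥ 4 is presumably the ‘more than one hole’ fact referred to above: Faltings proved that curves of genus greater than 1 have only finitely many rational points, which was a major step towards understanding Fermat’s Last Theorem.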

Static images such as these use only a portion of the human visual system, which evolved to visualise a 3-D space that can change in time. Systems expert Dan Clemmenson wrote, ‘modelling mathematical problems frequently require a multi-dimensional model, which the software must collapse into a 3-D and a time-D that makes visual sense. Colour, intensity and texture can be used to represent aspects of the problem’.
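A minimal sketch of the collapsing Clemmenson describes, using random data purely for illustration: a 4-D dataset drawn as a 3-D scatter plot, with colour carrying the fourth dimension:

```python
import numpy as np
import matplotlib.pyplot as plt

# Generate a random 4-D dataset: four coordinates per point.
rng = np.random.default_rng(0)
x, y, z, w = rng.random((4, 200))

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
# Three dimensions become spatial axes; the fourth is mapped to colour.
scatter = ax.scatter(x, y, z, c=w, cmap="viridis")
fig.colorbar(scatter, label="4th dimension")
plt.show()
```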

No doubt, representing multi-dimensional models in a form we evolved to understand causes a great deal of information to be lost. The difference between these representations and the ability to actually comprehend hyperbolic space might be compared to the difference between seeing the world in full colour and seeing it in black and white. The vast majority of mammals, by the way, have pretty poor perception of colour, because they have only two classes of colour-sensitive cells in the retina (technically known as ‘dichromatic’ vision). The exception to this rule is the primates, whose retinas are equipped with a third class of colour-sensitive cells, providing ‘trichromatic’ vision. Actually, thanks to biotechnology, a rat can now count itself among those animals blessed with trichromatic vision: scientists reprogrammed its genes so that it would manufacture the extra class of cells. The surprising thing is that the rat’s brain was able to process this extra visual information, despite the fact that no rodent’s retina had ever sent trichromatic visual data to a rodent brain before. Rather than limit information to a form we evolved to work with, we could augment our senses and neural architecture in order to perceive what is beyond our evolved capabilities. Examples of information that our brains are pretty poor at comprehending are the complex patterns that exist in financial, scientific and product data. Eventually, brain implants based on massively distributed nanobots will create new neural connections by communicating with each other and with our biological neurons, break existing connections by suppressing neural firing, add completely mechanical networks, allow us to interface intimately with computer programs and AI, and vastly improve all our sensory, pattern-recognition and cognitive abilities.
