SHADES OF GREY: An essay by Extropia DaSilva

NATURAL? ARTIFICIAL?

The fact is that we cannot separate the world so neatly into ‘natural’ things over here and ‘artificial’ things over there, because there exists a smooth continuum of examples blending one into the other. From the myriad phenotypes that grow from a single cell, to animals like spiders that manufacture an extended phenotype from bodily materials, to animals like hermit crabs that hunt down a single useful discarded item, to animals like beavers that gather materials and put them to purposes they did not evolve to serve, the ‘artificial’ and the ‘natural’ are closely related.

If there is a difference between the examples listed in the preceding paragraph and our technology, it is that those tools developed no faster than natural selection allowed. Human tools, on the other hand, have developed at a pace that, in comparison to evolution, led to our modern society in the blink of an eye. In just a couple of million years, we went from seeking caves in which to shelter (like a hermit crab seeking discarded shells), to emulating termites by building homes out of hardened mud, to our modern cities with their towering skyscrapers of glass and steel. This rapid progress has seen our technology grow into something that appears increasingly alienated from the natural environment. Deserts, forests and oceans are natural environments. Farmland seems very natural too, but it is actually engineered by us to grow crops whose genes were selected by our guiding hand, as opposed to natural selection. Few people feel they are in a natural environment when in a city like New York. Straight lines and right angles dominate, both of which seem abhorrent to nature. At night, when celestial mechanics dictate all should be dark, our urban environments blaze with light and buzz with activity.

From the hunter-gatherer society to the information age, the trend has been for our rapidly-growing technologies to become increasingly distinct from the natural world. So when we anticipate a future Singularity, we assume that the super-duper technological growth that marks its arrival will ensure it is as unmissable as the Hoover Dam. This vision of an ‘omninet’, though, suggests that the trend is now reversing, at least where information tech is concerned. Vinge predicts that embedded networks, spreading ‘beneath the net, supporting it as much as plankton supports the ocean ecology’, will become so ubiquitous that they comprise a sort of cyberspace Gaia merged with the biosphere. For users of this network, ease of operation will have moved beyond the point where connecting to the Web and accessing its functions is as effortless as getting water to flow from the tap. This is web surfing as intuitive as breathing, as natural as the experience of hearing these words spoken in your mind as you read. Terms like logging off and logging on cease to have meaning because the net is now omnipresent (hence, ‘Omninet’). Sci-fi visions of becoming immersed in cyberspace imagined this would occur via us ‘jacking in’ by plugging a cable into our brains. Cyberspace might indeed enter our brains, albeit via a network of nanoscale transponders communicating with neurons and each other on a wireless local area network. But, ultimately, if this idea of an Omninet is valid, immersion will happen because the Internet spreads out into ubiquitous sensors that pervade the environment.

THE SEMANTIC WEB.

The sheer quantity of data and diversity of knowledge that will exist in this age would overwhelm us, absolutely requiring advanced machine intelligence to help organize and make sense of it. Right now, the Internet is a valuable source of information, but it has a weakness: documents written in HTML are designed to be understood by people rather than machines. This is unlike the spreadsheets, word processors and other applications stored on your computer, which work with underlying machine-readable data. The job of viewing, searching and combining the information contained in address books, spreadsheets and calendars is made relatively simple thanks to a division of labour: humans supply the goal-setting, pattern-recognition and decision-making, while the computer handles the storage, retrieval, correlation, calculation and presentation of data.
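To make that division of labour concrete, here is a minimal sketch; the appointment record and its field names are invented for illustration. The same information is effectively opaque to software as free text inside HTML, but trivially searchable once it carries machine-readable structure.

```python
# The same appointment, first as human-readable HTML, then as machine-readable data.
html_version = "<p>Dentist appointment with Dr. Keller on 3 June at 14:00 in Room 12.</p>"

# A machine can display the HTML, but it cannot reliably answer
# "when is my next appointment?" from it without extra parsing work.

structured_version = {
    "type": "appointment",
    "with": "Dr. Keller",
    "date": "2025-06-03",
    "time": "14:00",
    "location": "Room 12",
}

# With structured data, the machine handles storage, retrieval and correlation,
# leaving the human to decide what to do with the answer.
def appointments_on(records, date):
    return [r for r in records if r["type"] == "appointment" and r["date"] == date]

print(appointments_on([structured_version], "2025-06-03"))
```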

It’s currently much more difficult to effectively search, manipulate and combine data on the Internet, because that additional layer of data that machines can understand is missing. The ‘Semantic Web’ is an ongoing effort to resolve this deficiency. At its heart lies ‘RDF’, or ‘Resource Description Framework’. If HTML is a mark-up language for text, making the Web something like a huge book, then RDF is a mark-up language for data, and the Semantic Web is a huge database of interconnected terms and information whose links can be followed automatically.
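As a rough sketch of what ‘mark-up for data’ means in practice, the fragment below uses the rdflib library to record a few facts as subject-predicate-object triples and serialise them as Turtle. The example.org vocabulary and the person described are invented for illustration, not part of any real dataset.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, FOAF

# An invented namespace for illustration; real data would use published vocabularies.
EX = Namespace("http://example.org/")

g = Graph()
alice = URIRef("http://example.org/people/alice")

# Each fact is a subject-predicate-object triple.
g.add((alice, RDF.type, FOAF.Person))
g.add((alice, FOAF.name, Literal("Alice")))
g.add((alice, EX.worksOn, URIRef("http://example.org/projects/omninet-survey")))

# Serialised as Turtle, it is the data itself, not just text, that gets published and linked.
print(g.serialize(format="turtle"))
```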

Once there’s a common language that allows computers to represent and share data, they will be in a better position to understand new concepts the way people do: by relating them to things they already know. They will also be more capable of understanding that when one website uses the term ‘heart attack’ and another uses the term ‘myocardial infarction’, they are talking about the same thing. They would, after all, carry the same semantic tag. If you wanted to determine how well a project is going, the Semantic Web would make it easier to map the dependencies and relationships among people, meeting minutes, research and other material. If you went to a weather site, you could pull off that data and drop it into a spreadsheet. The structure of the knowledge we have about any content on the web will become understandable to computers, thanks to the Semantic Web’s inference layer, which allows machines to link definitions. When thousands of concepts, terms, phrases and so on are linked together, we’ll be in a better position to obtain meaningful and relevant results, and to facilitate automated information gathering and research. As Tim Berners-Lee predicted, ‘the day-to-day mechanisms of trade, bureaucracy and our daily lives will be handled by machines talking to machines, leaving humans to provide the inspiration and intuition’.
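One hedged way to picture the ‘heart attack’ / ‘myocardial infarction’ example is the toy graph below, built with rdflib and an invented example.org vocabulary. Two sites tag their pages with different terms, a skos:exactMatch link records that the terms are equivalent, and a query phrased with one term follows that link to find pages tagged with the other. Plain rdflib does no reasoning of its own here; the SPARQL property path follows the link explicitly.

```python
from rdflib import Graph, Namespace, URIRef
from rdflib.namespace import SKOS

EX = Namespace("http://example.org/")   # invented vocabulary for illustration

g = Graph()
heart_attack = EX.HeartAttack
mi = EX.MyocardialInfarction

# Two sites tag their pages with different terms for the same concept...
g.add((URIRef("http://example.org/siteA/page1"), EX.topic, heart_attack))
g.add((URIRef("http://example.org/siteB/page7"), EX.topic, mi))

# ...but a link in the vocabulary records that the terms mean the same thing.
g.add((heart_attack, SKOS.exactMatch, mi))

# A query for one term follows the exactMatch link (in either direction),
# so it also finds pages tagged with the other term.
query = """
PREFIX ex: <http://example.org/>
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
SELECT ?page WHERE {
  ?page ex:topic ?term .
  ?term (skos:exactMatch|^skos:exactMatch)* ex:HeartAttack .
}
"""
for row in g.query(query):
    print(row.page)
```

A full inference layer would draw such connections automatically across thousands of vocabularies; this sketch only follows a single hand-written link.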

The Semantic Web will bring us closer to Berners-Lee’s original vision for the Internet. He saw it not only as a tool for communication, but also as a creative, collaborative playspace where people would not only present their finished ideas but also leave a trail for others to see why things had been done a certain way. Science, in particular, would be greatly facilitated by the Semantic Web, since it would provide unprecedented access to each field’s datasets and the ability to perform analyses on them. Researchers in disciplines like artificial intelligence, whose ultimate goals require the collaborative efforts of many scientific groups, will find the ability to perform very powerful searches across hundreds of sites, and to bridge the barriers created by technical jargon, immensely useful.

Speaking of AI, the vast storage capacity and wealth of information that will exist in an era of ubiquitous web access will make machine intelligence a real necessity, but at the same time having sensors everywhere and a wealth of information will make the task of building smarter machines easier. This is because an AI system that has to make a recommendation based on a few data points is bound to work less well than one with access to a lot of information about its users. With many sensors in the environment forming a network, it may be possible for computer intelligence to obtain the necessary information without having to rely on complicated perception. A simple example would be a floor covered with pressure-sensitive sensors that track your footsteps, letting a robot know where you are and where you are headed. Also, contemporary experiments in which volunteers have had their daily lives closely monitored via wearable devices have revealed that up to 90 percent of what most people do in any day follows routines so complete that just a few mathematical equations are enough to predict their behaviour. Psychologist John Bargh explained, ‘most of a person’s everyday life is determined not by their conscious intentions and deliberate choices but by mental processes put into motion by the environment’. If that environment were full of sensors feeding information to software designed to learn from them, the Omninet would grow increasingly capable of anticipating our needs and presenting information at the exact moment it’s needed and in the correct context.
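As a minimal sketch of the kind of routine-learning described above, the fragment below trains a first-order frequency model (essentially a Markov chain) on an invented log of room-to-room movements, as a sensor-laden floor might report them, and then predicts the most likely next location. The log and room names are hypothetical.

```python
from collections import Counter, defaultdict

# Hypothetical log of rooms visited over a few mornings, as a pressure-sensitive
# floor or similar ambient sensors might report them.
log = ["bedroom", "bathroom", "kitchen", "hallway", "office",
       "kitchen", "hallway", "office",
       "bedroom", "bathroom", "kitchen", "hallway", "office"]

# Count how often each room follows each other room (a first-order Markov model).
transitions = defaultdict(Counter)
for here, there in zip(log, log[1:]):
    transitions[here][there] += 1

def predict_next(room):
    """Return the most frequently observed next room, or None if the room is unseen."""
    if not transitions[room]:
        return None
    return transitions[room].most_common(1)[0][0]

# If the floor sensors report someone in the kitchen, the system can anticipate
# that they are probably headed for the hallway (and then the office).
print(predict_next("kitchen"))   # -> 'hallway'
```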

Again, this will require the inclusion of machine-readable data, and networked embedded computers may increase the likelihood that this will be attended to with little human effort. A photograph’s location, for example, could be assigned automatically using GPS. Moreover, now that we have people uploading countless snapshots to the Web, there’s a vast amount of material with which to train object-recognition software. According to New Scientist, ‘robots and computer programs are starting to take advantage of the wealth of images posted online to find out about everyday objects. When presented with a new word, instead of using the limited index it has been programmed with, this new breed of automaton goes online and enters the word into Google, (using) the resulting range of images to recognise the object in the real world’. Eventually, each document type that we use to record and organize our daily lives will be tagged with data that allows the Omninet to identify the nature of each by analysing its form and content. And by linking this metadata, people will be able to rely increasingly on automated image recognition, natural language processing and full text/speech searches to hunt down particular websites or emails, or to recollect barely remembered events from a few sparse phrases, sounds or images.
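A toy sketch of the metadata-driven recall described here, with entirely invented records: each item carries machine-readable tags (type, timestamp, GPS coordinates, a few recognised keywords), and a couple of sparse phrases are enough to pull back a half-remembered item.

```python
# Invented metadata records of the kind the Omninet might accumulate automatically:
# a type tag, a timestamp, GPS coordinates and a few recognised keywords per item.
items = [
    {"type": "photo", "time": "2024-07-14T18:03", "gps": (41.89, 12.49),
     "keywords": {"fountain", "evening", "friends"}},
    {"type": "email", "time": "2024-07-15T09:20", "gps": None,
     "keywords": {"invoice", "hotel", "rome"}},
    {"type": "photo", "time": "2023-12-02T11:45", "gps": (51.51, -0.13),
     "keywords": {"market", "rain"}},
]

def recall(records, *phrases):
    """Return records whose metadata mentions every given phrase."""
    wanted = {p.lower() for p in phrases}
    return [r for r in records
            if wanted <= ({r["type"]} | {k.lower() for k in r["keywords"]})]

# 'That photo with the fountain...' : a few sparse phrases are enough to find it.
print(recall(items, "photo", "fountain"))
```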

At this point, it might be worth remembering that significant outcomes can be the result of mundane causes. The transformation of the Internet into something like an omnipresent oracle is not the explicit goal of most R&D today. Semantic Web tools are mostly used for the more conservative purpose of coding and connecting companies’ data so that it becomes useful across the organization. Much of this re-organization is invisible to the consumer; perhaps the only outward sign is an increase in the efficiency with which financial data can be sorted, or the way improved, automated databases make shopping online less of a headache. The relationship is two-way, of course. Companies benefit from the metadata they obtain about their users, just as users benefit from the increased efficiency brought about by the companies’ semantic tools. IBM offers a service that finds discussions of a client’s products on message boards and blogs, drawing conclusions about trends. Again, nothing startling here, just ways to improve market research. But these mundane tools and the conservative steps they enable are laying down the foundation upon which the next generation of Semantic Web applications will be built, and so it continues, step by cumulative step. From the perspective of each consecutive step, the next immediate goal invariably seems just as mundane as ever, but as many thousands of these steps are taken, we progress smoothly toward tremendous technological advances. As disparate data sources get tied together with RDF and developers order terms according to their conceptual relationships to one another, the Web will be transformed from a great pile of documents that might hold an answer into something more like a system for answering questions. The dream of a web populated by truly intelligent software agents automatically doing our bidding will probably be realised only after we have established a global system for making data accessible through queries. EarthWeb co-founder Nova Spivack sees the progression toward a global brain occurring in these two steps: ‘First comes the world wide database… with no AI involved. Step two is the intelligent web, enabling software to process information more intelligently’.
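Something like the ‘web as a system for answering questions’ can already be glimpsed in public SPARQL endpoints. The sketch below uses the SPARQLWrapper library against DBpedia’s public endpoint (assuming the endpoint is reachable and that its dbo:populationTotal property is populated for Berlin) to ask a question of data rather than search documents.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Ask DBpedia's public endpoint a question of its data, rather than searching pages.
endpoint = SPARQLWrapper("https://dbpedia.org/sparql")
endpoint.setQuery("""
    PREFIX dbr: <http://dbpedia.org/resource/>
    PREFIX dbo: <http://dbpedia.org/ontology/>
    SELECT ?population WHERE { dbr:Berlin dbo:populationTotal ?population }
""")
endpoint.setReturnFormat(JSON)

results = endpoint.query().convert()
for binding in results["results"]["bindings"]:
    print("Population of Berlin:", binding["population"]["value"])
```

Step one of Spivack’s progression, the ‘world wide database’, looks much like this; step two would be software that composes and interprets such queries on our behalf.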
