
Big data: The next Google

What will happen in the next 10 years?

Ten years ago this month, Google's first employee turned up at the garage where the search engine was originally housed. What technology at a similar early stage today will have changed our world as much by 2018? Nature asked some researchers and business people to speculate — or lay out their wares. Their responses are wide-ranging, but one common theme emerges: the integration of the worlds of matter and information, whether it be the blurring of boundaries between online and real environments, touchy-feely feedback from a phone or chromosomes tucked away in databases.

Credit: N. Spencer

Bill Buxton

Principal researcher, Microsoft, Toronto, Canada


I subscribe to Melvin Kranzberg's second law of technology: invention is the mother of necessity. Although technologies are created to fulfil needs, each also creates them; the next generation of technologies will deliver the promises of what we already have.

The history of communication technologies over the past century tells me that anything that's going to have an impact in the next ten years is going to be ten years old already. (The components that made Google possible ten years ago were already there ten years earlier, with the creation of the web.) One prime candidate is electronic paper: displays that are as easy to view in ambient light as paper and that consume hardly any power. It started with E Ink a decade ago; now we are seeing it in devices such as Amazon's Kindle, which I would say has not yet matured but has certainly reached late adolescence. The Kindle and other readers are really like the Ford Model T in terms of what will be available in five years.

I think with this technology will come a dramatic change in our attitude towards paper. Our attachment to paper and books is wonderful, charming and quite understandable. I can't stand reading stuff on my computer. But this technology will make us question whether we can really afford the 500,000 trees that are consumed by publishing and newsprint in North America each week.


Vincent Hayward

Professor of engineering, Pierre and Marie Curie University, Paris, France


Ten years ago, if you mentioned the word 'haptics' most people would think you were talking about some form of liver disease. Interfaces that provide tactile feedback have been in an innovator-driven 'push' mode; they have been technologically challenging, expensive and restricted to niches. Now there is a public pull, thanks to the spread of touch-screen devices. The objective is to make the interface more intuitive and less reliant on vision — something you can use without looking at it. Haptics makes that possible.

Two or three mobile-phone manufacturers have products on the market with haptic features, and some car companies are doing the same. The feedback acts like an acknowledgement, so you can feel when an onscreen button has been pressed. But there is also something more basic. As animals we operate on the basis of anticipation. Visual interfaces reduce our ability to anticipate because we are touching something that is not there; there is no anticipated sensation, and the sensory consequences of our movements are unsatisfying. Haptic feedback gives us what our minds anticipate; it completes the control loop.

Right now haptic displays are mostly capable of creating only single isolated sensations of contact, or of toggling through menus. But texture, shape and 'compliance' will become more refined and affordable. A dry, flat screen will be able to simulate the feel of fur or wetness.


Ian Pearson

Futurizon consultancy, Ipswich, UK


We're crying out for technology that will allow us to combine what we can do on the Internet with what we do in the physical world. That's why the Nintendo Wii has been so successful. One technology that springs to mind is the video visor, which gives you a computer image superimposed over the world around you.

These have been around for a few years, but they currently have pretty low resolution. The resolution will improve and the cost will come down; at the same time, demand will grow because the visors can provide information to people on the move. People have their iPhones and BlackBerrys with lots of data and functions, but they want bigger displays. Wearing visors may seem odd at first, but then people used to stand out when mobile phones and Bluetooth headsets first came out. Now everyone uses them.

When you start to combine visor graphics with more accurate global-positioning data, as will be provided by the European Galileo satellites, you can overlay online information onto the world around you. So as you're walking down a busy city street you will be able to see reviews of shops and restaurants, adverts for services, other people who have similar interests to you, or whatever.

When you are wearing a visor your surroundings can have a completely different appearance: a burger restaurant can look like a giant burger without flouting planning laws. You could be seen as your Second Life virtual avatar. Or Johnny Depp, or Claudia Schiffer. You get the best of both worlds.

Leo Kärkkäinen

Chief visionary, Nokia Research Center, Espoo, Finland


Ordinary products are going to have memories that store their entire history from cradle to grave, and that consumers can easily access.

Radio-frequency identification tags are a good option because they are already widely used to track inventory and to control theft. They are cheap and can be powered by an outside power source, such as the radio signal from the device being used to read them. But there may be another enabling technology that wins out.

Near-field communication systems already allow a phone to be used like a smart card for a travel pass, or as an electronic wallet to pay for goods. If that technology can talk to the things you buy, as well as the systems through which you pay for them, it will let consumers choose not to buy goods that are unhealthy or allergenic, or that were produced using environmentally unfriendly methods or child labour.
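The kind of screening described here can be sketched in a few lines of code. This is an illustrative sketch only: the product record, the preference fields and the function names are invented, and a real system would read such data from a tag via NFC rather than from a hard-coded dictionary.

```python
# Hypothetical sketch: a phone reads a product's history from its tag
# and screens it against the shopper's stated preferences.
product = {
    "name": "chocolate bar",
    "allergens": ["peanuts"],
    "certified_child_labour_free": False,
}

preferences = {
    "avoid_allergens": ["peanuts"],
    "require_child_labour_free": True,
}

def objections(item, prefs):
    """List the reasons this shopper would reject the item."""
    reasons = []
    if set(item["allergens"]) & set(prefs["avoid_allergens"]):
        reasons.append("contains an allergen to avoid")
    if prefs["require_child_labour_free"] and not item["certified_child_labour_free"]:
        reasons.append("no child-labour-free certification")
    return reasons

print(objections(product, preferences))
```

The point of the sketch is that once the product's history is machine-readable, the comparison itself is trivial; the hard part is getting trustworthy data onto the tag.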

As with many technologies, it could potentially be used for bad purposes; we have to ensure that privacy functions are built in to the system to put the consumer in control of whether they want to be tracked.


Helen Greiner

Chairman and co-founder, iRobot, Burlington, Massachusetts


Others have said it before, but I now think it's a safe bet to say that within the next ten years robots will become a lot more commonplace. The key is autonomy. Unless a robot has 'mission-based' autonomy, it needs to be controlled by a human; this makes sense for something critical such as a military operation, but is often just a waste of time. Now we're seeing robotic agents that can go out and act on their own: ploughing fields, mowing lawns or cleaning offices. Increasingly autonomous robots will be capable of more sophisticated behaviours, taking on more complex chores and tasks in agriculture, construction, logistics, care of the elderly, the military and the home.

To get autonomy you need perception of the environment, an intelligent software architecture, a physical system or body and behaviours. Our Roomba vacuum cleaner is an example of autonomy with all these features.

We've now created a sort of robotic operating system, Aware 2.0, which runs robotic behaviours as though they were software applications. It greatly simplifies the creation of new robots, as does modularity in the mechanical design, the perceptive systems and the components of intelligence. That makes it possible to build on past successes; once you have developed a navigation behaviour, for example, it can be used in other platforms.
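The behaviours-as-applications idea can be sketched in code. This is a hypothetical illustration of a priority-ordered behaviour architecture, not iRobot's actual Aware API; all class and method names are invented.

```python
# Behaviours are interchangeable modules that a robot 'operating
# system' schedules, so a behaviour written for one platform can
# be reused on another.
class Behaviour:
    def step(self, sensors):
        raise NotImplementedError

class AvoidObstacle(Behaviour):
    def step(self, sensors):
        # Turn away whenever the bumper reports contact.
        return "turn_left" if sensors.get("bumper") else None

class Navigate(Behaviour):
    def step(self, sensors):
        return "drive_forward"

class RobotOS:
    """Runs behaviours in priority order; the first non-None action wins."""
    def __init__(self, behaviours):
        self.behaviours = behaviours

    def step(self, sensors):
        for b in self.behaviours:
            action = b.step(sensors)
            if action is not None:
                return action
        return "idle"

robot = RobotOS([AvoidObstacle(), Navigate()])
print(robot.step({"bumper": True}))   # obstacle avoidance takes priority
print(robot.step({"bumper": False}))  # otherwise keep navigating
```

Because each behaviour only sees a sensor dictionary and returns an action, a navigation behaviour developed for one robot can, in principle, be dropped into another platform unchanged, which is the reuse Greiner describes.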

Esther Dyson

Investor in for-profit and not-for-profit start-ups, New York


I'm on the board of 23andMe of Mountain View, California, which makes genetic information accessible to its owners — and lets them share it for research if they want to.

For now, 23andMe looks only at common genetic variations, which mostly show risk factors — there are only a few conditions for which a genetic anomaly indicates almost 100% risk, and even then you might not know the timing or intensity. Our service, which costs US$1,000, will become cheaper as the cost of the information processing, the chemistry and the imaging technology comes down and can be spread over a broader base of customers.

The first users are mostly benefactors; later users will be beneficiaries. As hundreds of thousands, and eventually millions, of people take part, the genetic information collected will enable us to know so much more through data mining, combined with analysis of the interactions of genes and other factors. We'll be able to pre-empt many diseases and treat others better. In addition, I hope this technology will change people's behaviour and encourage them to eat better and exercise more, because they'll have a better understanding of the impact of their behaviour on their health.

Everyone dies of something; your genome gives you hints of which causes are most likely for you. But it doesn't predict precisely or with certainty, or tell you when. People's level of understanding of statistics in relation to soccer or gambling always amazes me, so there is hope that people can likewise understand the difference between correlation and causation in genetics.

The following material is web-only additional text

Joi Ito

Co-founder of Infoseek Japan and chief executive of Creative Commons, Tokyo, Japan


The next big thing will come from connecting people and ideas together with a Google-like simplicity — making Wikipedia, Facebook and all sorts of other things completely seamless. It sounds obvious and yet it's hard to imagine. But then, before Google it was hard to imagine what search could be like. Before Tim Berners-Lee it was hard to imagine the web.

I think that a key part of it will be software that automatically gives attribution for the various parts of content we access and share. People want to share content with each other, but the infrastructure and legal framework make it more difficult than it should be. Legal friction is holding back a lot of creativity. If you have software that works out who owns what for you and gives credit where it is due, and if it can support all different kinds of content, then you start to have a network that enables a great deal more creativity.

Anshe Chung

Avatar of Ailin Graef, the first person to achieve a net worth of more than a million dollars from profits earned in a virtual world, Second Life


I think that the physical and virtual will merge more and more over the next decade as three-dimensional (3D) environments become increasingly easy to use through normal browsers and mobile phones.

These 3D scenes will represent real people and real places — things of value. When I enter a 3D scene and know it is an up-to-date copy of Manhattan, and interact with other users who are either virtually present or even physically located in the real place, it becomes far more meaningful than a fantasy world or a game.

Social worlds such as Second Life have managed to create 3D communities of hundreds of thousands of people, but accepting the simple avatars and the environment requires learning and effort. Several technologies could help realize this merging. The first is computer graphics capable of creating photo-realistic images. The second is the means to capture huge parts of the physical world and add them to the 3D world. Companies such as Google and Microsoft have already started doing this using satellite images and huge amounts of imagery in cities, with users contributing by adding data and metadata. The third is representations of people that bring them into the space mentally and allow them to interact with it better.

Kevin Kelly

Founding executive editor, Wired magazine, Pacifica, California


The semantic web is very difficult to explain because there's nothing really to look at. Google had a sparse homepage — the semantic web doesn't have anything at all. But I think the total effect of it will be at least equal to that of Google.

The idea is that if everything on the web were described and reduced to a noun, verb and predicate, as in a language, computers could 'read' the web. It would have meaning. Then machines could do a lot of the things normally done by people, because they can suddenly read information. If you want to book a taxi to the airport, the semantic web gives a machine the ability to know certain things: it will know your flight times, that there are roadworks on the way to the airport, which cab firm you prefer, and so on. A second-order effect would be that the information would come to you, rather than you going to it.
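The triple idea can be illustrated with a few lines of code. This is a toy sketch under invented data: real semantic-web systems express such triples in RDF and query them with dedicated stores, not Python lists, and every fact below is made up for illustration.

```python
# Each fact is a (subject, predicate, object) triple, the core data
# structure behind machine-readable statements on the semantic web.
facts = [
    ("flight BA117", "departs_at", "09:40"),
    ("flight BA117", "departs_from", "Heathrow"),
    ("A4 to Heathrow", "has_status", "roadworks"),
    ("traveller", "prefers_cab_firm", "City Cars"),
]

def query(triples, subject=None, predicate=None, obj=None):
    """Return every triple matching the given fields (None = wildcard)."""
    return [
        (s, p, o)
        for (s, p, o) in triples
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# A booking agent could combine such queries to plan the taxi trip:
print(query(facts, subject="flight BA117"))
print(query(facts, predicate="has_status"))
```

Once facts from different sites share this shape, a machine can join them mechanically, which is what lets it assemble the flight time, the road conditions and the cab preference into one booking decision.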

But getting there is a chicken-and-egg problem. Hand coding is very laborious and the initial benefits are small, so until there's a critical mass it's difficult to persuade people to do it. The breakthrough for search engines was PageRank. With the semantic web the tipping point could come from something like an automated parser, which codes the meaning of content automatically. Some websites, such as Twine, are beginning to do this.

Sam Schillace

Google, Mountain View, California


Prior to Google, everyone said search was done. But the point was that search could be a lot better. The same is true of browsers today.

On the web, simplicity matters more than completeness — the platform needs to be simple, ubiquitous and good enough. The browser is that platform. It means any screen you look at can be a window into your own personal, private cloud of information. I use three different computers every day but don't worry which of them a particular file, picture or e-mail is on, because they are online and my browser can find them.

The current generation of browsers can already run some pretty sophisticated applications without having to install software, and it's starting to extend to mobile devices too. The next generation of browsers, and the web applications that run on them, will make communication and collaboration even more transparent and let me focus on what I really want to do — connect with the person at the other end and get work done together. It will turn the web into a superconductor for interactions with other people and change the way we work pretty radically.

Additional information

Interviews by Duncan Graham-Rowe. See also Editorial, page 1.



Cite this article

Big data: The next Google. Nature 455, 8–9 (2008).
