A Robot in Every Home

The leader of the PC revolution predicts that the next hot field will be robotics

Imagine being present at the birth of a new industry. It is an industry based on groundbreaking new technologies, wherein a handful of well-established corporations sell highly specialized devices for business use and a fast-growing number of start-up companies produce innovative toys, gadgets for hobbyists and other interesting niche products. But it is also a highly fragmented industry with few common standards or platforms. Projects are complex, progress is slow, and practical applications are relatively rare. In fact, for all the excitement and promise, no one can say with any certainty when—or even if—this industry will achieve critical mass. If it does, though, it may well change the world.

Of course, the paragraph above could be a description of the computer industry during the mid-1970s, around the time that Paul Allen and I launched Microsoft. Back then, big, expensive mainframe computers ran the back-office operations for major companies, governmental departments and other institutions. Researchers at leading universities and industrial laboratories were creating the basic building blocks that would make the information age possible. Intel had just introduced the 8080 microprocessor, and Atari was selling the popular electronic game Pong. At homegrown computer clubs, enthusiasts struggled to figure out exactly what this new technology was good for. But what I really have in mind is something much more contemporary: the emergence of the robotics industry, which is developing in much the same way that the computer business did 30 years ago. Think of the manufacturing robots currently used on automobile assembly lines as the equivalent of yesterday's mainframes. The industry's niche products include robotic arms that perform surgery, surveillance robots deployed in Iraq and Afghanistan that dispose of roadside bombs, and domestic robots that vacuum the floor. Electronics companies have made robotic toys that imitate people or dogs or dinosaurs, and hobbyists are anxious to get their hands on the latest version of the Lego robotics system.

Meanwhile, some of the world's best minds are trying to solve the toughest problems of robotics, such as visual recognition, navigation and machine learning. And they are succeeding. At the 2004 Defense Advanced Research Projects Agency (DARPA) Grand Challenge, a competition to produce a robotic vehicle capable of autonomously navigating a rugged 142-mile course through the Mojave Desert, the top competitor traveled just 7.4 miles before breaking down. In 2005 five vehicles covered the complete distance. And in November 2007 six vehicles completed a 60-mile course through a simulated urban environment in which they were required to merge with moving traffic, traverse busy intersections, avoid obstacles and find parking. (In another intriguing parallel between the robotics and computer industries, DARPA also funded the work that led to the creation of Arpanet, the precursor to the Internet.)


What is more, the challenges facing the robotics industry are similar to those we tackled in computing three decades ago. Robotics companies have no standard operating software that could allow popular application programs to run on a variety of devices. The standardization of robotic processors and other hardware is limited. Whenever somebody wants to build a new robot, they usually have to start from square one.

Despite these difficulties, when I talk to people involved in robotics—from university researchers to entrepreneurs, hobbyists and high school students—the level of excitement and expectation reminds me so much of that time when Paul Allen and I looked at the convergence of new technologies and dreamed of the day when a computer would be on every desk and in every home. And as I look at the trends that are now starting to converge, I can envision a future in which robotic devices will become a nearly ubiquitous part of our day-to-day lives. I believe that technologies such as distributed computing, voice and visual recognition, and wireless broadband connectivity will open the door to a new generation of autonomous devices that enable computers to perform tasks in the physical world on our behalf. We may be on the verge of a new era, when the PC will get up off the desktop and allow us to see, hear, touch and manipulate objects in places where we are not physically present.

[break] From Science Fiction to Reality

THE WORD “ROBOT” was popularized in 1921 by Czech playwright Karel Čapek, but people have envisioned creating robotlike devices for thousands of years. In Greek and Roman mythology, the gods of metalwork built mechanical servants made from gold. In the first century A.D., Heron of Alexandria—the great engineer credited with inventing the first steam engine—designed intriguing automatons, including one said to have the ability to talk. Leonardo da Vinci's 1495 sketch of a mechanical knight, which could sit up and move its arms and legs, is considered to be the first plan for a humanoid robot.

Over the past century, anthropomorphic machines have become familiar figures in popular culture through books such as Isaac Asimov's I, Robot, movies such as Star Wars and television shows such as Star Trek. The popularity of robots in fiction indicates that people are receptive to the idea that these machines will one day walk among us as helpers and even as companions. Nevertheless, although robots play a vital role in industries such as automobile manufacturing—where there is about one robot for every 10 workers—we have a long way to go before real robots catch up with their science-fiction counterparts.

One reason for this gap is that it has been much harder than expected to give robots the capabilities that humans take for granted—for example, the abilities to orient themselves with respect to the objects in a room, to respond to sounds and interpret speech, and to grasp objects of varying sizes, textures and fragility. Even something as simple as telling the difference between an open door and a window can be devilishly tricky for a robot.

But researchers are starting to find the answers. One trend that has helped them is the increasing availability of tremendous amounts of computer power. One megahertz of processing power, which cost more than $7,000 in 1970, can now be purchased for just pennies. The price of a megabit of storage has seen a similar decline. Access to cheap computing power has permitted scientists to work on many of the hard problems that are fundamental to making robots practical. Today, for example, voice-recognition programs can identify words quite well, but a far greater challenge will be building machines that can understand what those words mean in context. As computing capacity continues to expand, robot designers will have the processing power they need to tackle issues of ever greater complexity.

Another barrier to the development of robots has been the high cost of hardware, such as sensors that enable a robot to determine the distance to an object as well as motors and servos that allow the robot to manipulate objects with strength and delicacy. But prices are dropping fast. Laser range finders, which robots use to measure distance with precision, cost about $10,000 a few years ago but can be purchased today for about $2,000. And new, more accurate sensors based on ultrawideband radar are available for even less.

Now robot builders can also add Global Positioning System chips, video cameras, array microphones (which are better than conventional microphones at distinguishing a voice from background noise), and a host of additional sensors for a reasonable expense. The resulting enhancement of capabilities, combined with expanded processing power and storage, allows today's robots to do things such as vacuum a room or help to defuse a roadside bomb—tasks that would have been impossible for commercially produced machines just a few years ago.

[break] A BASIC Approach

IN FEBRUARY 2004 I visited a number of leading universities, including Carnegie Mellon University, Cornell University and the University of Illinois, to talk about the powerful role that computers can play in solving some of society's most pressing problems. My goal was to help students understand how exciting and important computer science can be, and I hoped to encourage a few of them to think about careers in technology. At each university, after delivering my speech, I had the opportunity to see some of the most interesting research projects in the school's computer science department. Almost without exception, I was shown at least one project that involved robotics.

At that time, my colleagues at Microsoft were also hearing from people in academia and at commercial robotics firms who wondered if our company was doing any work in robotics that might help them with their own development efforts. We were not, so we decided to take a closer look. I asked Tandy Trower, a member of my strategic staff and a 26-year Microsoft veteran, to speak with people across the robotics community. What he found was universal enthusiasm for the potential of robotics and an industry-wide desire for tools that would make development easier. “Many see the robotics industry at a technological turning point where a move to PC architecture makes more and more sense,” Tandy wrote in his report to me after his fact-finding mission. “As Red Whittaker, leader of [Carnegie Mellon's] entry in the DARPA Grand Challenge, recently indicated, the hardware capability is mostly there; now the issue is getting the software right.”

Back in the early days of the personal computer, we realized that we needed an ingredient that would allow all of the pioneering work to achieve critical mass, to coalesce into a real industry capable of producing truly useful products on a commercial scale. What was needed, it turned out, was Microsoft BASIC. When we created this programming language in the 1970s, we provided the common foundation that enabled programs developed for one set of hardware to run on another. BASIC also made computer programming much easier, which brought more and more people into the industry. Although a great many individuals made essential contributions to the development of the personal computer, Microsoft BASIC was one of the key catalysts that made the PC revolution possible.

After reading Tandy's report, it seemed clear to me that before the robotics industry could make the same kind of quantum leap that the PC industry made 30 years ago, it, too, needed to find that missing ingredient. So I asked him to assemble a small team that would work with people in the robotics field to create a set of programming tools that would provide the essential plumbing so that anybody interested in robots could easily write robotic applications that would work with different kinds of hardware. The goal was to see if it was possible to provide the same kind of foundation for integrating hardware and software into robot designs that Microsoft BASIC provided for computer programmers.

Tandy's robotics group has drawn on a number of advanced technologies developed by a team working under the direction of Craig Mundie, Microsoft's chief research and strategy officer. One such technology will help solve one of the most difficult problems facing robot designers: how to simultaneously handle all the data coming in from multiple sensors and send the appropriate commands to the robot's motors, a challenge known as concurrency. A conventional approach is to write a traditional, single-threaded program—a long loop that first reads all the data from the sensors, then processes this input and finally delivers output that determines the robot's behavior, before starting the loop all over again. The shortcomings are obvious: if your robot has fresh sensor data indicating that the machine is at the edge of a precipice, but the program is still at the bottom of the loop calculating trajectory and telling the wheels to turn faster based on previous sensor input, there is a good chance the robot will fall down the stairs before it can process the new information.
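
To make the hazard concrete, here is a minimal sketch in Python of the single-threaded loop just described. The sensor, planning and motor functions are hypothetical placeholders invented for illustration, not part of any real robot's software; the point is only to show where fresh data gets stuck waiting.

```python
import time

def read_sensors():
    # Hypothetical placeholder: a real robot would poll range finders, bump
    # sensors and so on. Here it just returns a fixed distance reading.
    return {"distance_to_edge_m": 0.2}

def compute_trajectory(reading):
    # Hypothetical placeholder for the slow planning step. While this runs,
    # any newer sensor data simply waits, unread, for the next pass of the loop.
    time.sleep(0.5)  # pretend this is an expensive calculation
    return "forward at 0.5 m/s"

def drive_wheels(command):
    # Hypothetical placeholder: would send the command to the motor controller.
    print("driving:", command)

# The conventional single-threaded control loop: sense, plan, act, repeat.
# If the robot reaches the edge of a stairwell while compute_trajectory() is
# still chewing on the previous reading, the loop cannot react until it comes
# back around -- possibly too late.
for _ in range(3):  # a real robot would loop forever
    reading = read_sensors()
    command = compute_trajectory(reading)
    drive_wheels(command)
```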

Concurrency is a challenge that extends beyond robotics. Today as more and more applications are written for distributed networks of computers, programmers continue to struggle to figure out how to efficiently orchestrate code running on many different servers at the same time. And as computers with a single processor are replaced by machines with multiple processors and “multicore” processors—integrated circuits with two or more processors joined together for enhanced performance—software designers will need a new way to program desktop applications and operating systems that solves the problem of concurrency.

One approach to concurrency is multithreaded programming that allows data to travel along many paths. But as any developer who has written multithreaded code can tell you, this is one of the hardest tasks in programming. The answer that Craig's team has devised is something called the concurrency and coordination runtime (CCR). The CCR is a library of functions—sequences of software code that perform specific tasks—that makes it easy to write multithreaded applications that coordinate a number of simultaneous activities. Designed to help programmers take advantage of the power of multicore and multiprocessor systems, it is now being used to program scientific modeling applications, to construct sensor networks and to develop software for financial transaction companies. The CCR also turns out to be ideal for robotics. By drawing on this library to write their programs, robot designers can dramatically reduce the chances that one of their creations will run into a wall because its software is too busy sending output to its wheels to read input from its sensors.
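
The CCR itself is a .NET library, so the following is only a conceptual sketch in Python of the coordination idea rather than the CCR's actual interface: sensing and control run as separate, cooperating activities, and an urgent reading is acted on the moment it arrives instead of waiting for a long sequential loop to come back around. All names here are invented for illustration.

```python
import threading
import queue
import time

readings = queue.Queue()          # sensor readings flow to the controller here
stopped = threading.Event()       # set once the controller decides to halt

def sensor_activity():
    # Hypothetical sensor: the robot rolls toward a drop-off, so the measured
    # distance to the edge shrinks with every reading.
    distance = 1.0
    while not stopped.is_set():
        readings.put(distance)
        distance -= 0.1
        time.sleep(0.05)

def control_activity():
    # The controller wakes the moment a reading is available, so an
    # "edge ahead" measurement is never stuck behind unfinished planning work.
    while True:
        distance = readings.get()
        if distance < 0.3:
            print(f"edge at {distance:.1f} m -- stopping the wheels")
            stopped.set()
            return
        print(f"edge at {distance:.1f} m -- driving on")

sensor = threading.Thread(target=sensor_activity)
controller = threading.Thread(target=control_activity)
sensor.start()
controller.start()
controller.join()
sensor.join()
```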

In addition to tackling the problem of concurrency, the work that Craig's team has done will also simplify the writing of distributed robotic applications through a technology called decentralized software services (DSS). DSS enables developers to create applications in which the services—the parts of the program that read a sensor, say, or control a motor—operate as separate processes that can be orchestrated in much the same way that text, images and information from several servers are aggregated on a Web page. Because DSS allows software components to run in isolation from one another, if an individual component of a robot fails, it can be shut down and restarted—or even replaced—without having to reboot the machine. Combined with broadband wireless technology, this architecture makes it easy to monitor and control a robot from a remote location using a Web browser.
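
Again as a hypothetical Python stand-in rather than the DSS runtime itself, the sketch below shows the underlying idea: each service runs in its own process, and a simple supervisor can restart a crashed service while the rest of the application keeps running.

```python
import multiprocessing
import time

def camera_service():
    # Hypothetical service: pretend to stream frames briefly, then fail the
    # way flaky hardware sometimes does.
    time.sleep(1)
    raise RuntimeError("camera driver hiccup")

def motor_service():
    # Hypothetical service: keeps the wheels responsive regardless of what
    # happens to the camera service.
    while True:
        time.sleep(0.5)  # stand-in for reading commands and driving the motors

if __name__ == "__main__":
    motors = multiprocessing.Process(target=motor_service, daemon=True)
    motors.start()

    # Supervise the camera service: if its process dies, restart just that
    # process -- no need to reboot the robot or touch the motor service.
    for attempt in range(1, 4):
        camera = multiprocessing.Process(target=camera_service)
        camera.start()
        camera.join()
        if camera.exitcode != 0:
            print(f"camera service failed (attempt {attempt}); restarting that service only")
    print("motor service was never interrupted:", motors.is_alive())
```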

What is more, a DSS application controlling a robotic device does not have to reside entirely on the robot itself but can be distributed across more than one computer. As a result, the robot can be a relatively inexpensive device that delegates complex processing tasks to the high-performance hardware found on today's home PCs. I believe this advance will pave the way for an entirely new class of robots that are essentially mobile, wireless peripheral devices that tap into the power of desktop PCs to handle processing-intensive tasks such as visual recognition and navigation. And because these devices can be networked together, we can expect to see the emergence of groups of robots that can work in concert to achieve goals such as mapping the seafloor or planting crops.
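
The sketch below illustrates that division of labor under stated assumptions: a "robot" client ships a small sensor snapshot over the network to a "PC" service, which does the heavy computation and returns a command. The address, the tiny JSON protocol and the decision rule are all invented for illustration and are not part of any Microsoft toolkit.

```python
import json
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 9099   # would be the home PC's address on the network

def pc_side():
    # Runs on the desktop PC, where the expensive work (say, visual
    # recognition or path planning) actually happens.
    with socket.create_server((HOST, PORT)) as server:
        conn, _ = server.accept()
        with conn:
            snapshot = json.loads(conn.recv(4096).decode())
            bearing = snapshot["obstacle_bearing"]     # stand-in for heavy processing
            command = {"turn": "left" if bearing > 0 else "right"}
            conn.sendall(json.dumps(command).encode())

def robot_side():
    # Runs on the robot: cheap hardware that only gathers data and moves motors.
    with socket.create_connection((HOST, PORT)) as conn:
        conn.sendall(json.dumps({"obstacle_bearing": 0.4}).encode())
        print("robot received:", json.loads(conn.recv(4096).decode()))

pc = threading.Thread(target=pc_side)
pc.start()
time.sleep(0.2)      # give the PC side a moment to start listening
robot_side()
pc.join()
```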

These technologies are a key part of a software development kit built by Tandy's team for the robotics industry. The kit also includes tools that make it easier to create robotic applications using a wide range of programming languages. One example is a simulation tool that lets robot builders test their applications in a three-dimensional virtual environment before trying them out in the real world. The software development kit was created to provide an affordable, open platform that allows robot developers to readily integrate hardware and software into their designs, and it has been downloaded more than 150,000 times since it was released in 2006. We are also working with a number of universities to support robotic research programs. One example is the Institute for Personal Robots in Education at the Georgia Institute of Technology and Bryn Mawr College, which was created to explore the use of robots as a way to engage students in the study of engineering, math and science.

[break] Should We Call Them Robots?

HOW SOON WILL ROBOTS become part of our day-to-day lives? According to the International Federation of Robotics, about two million personal robots were in use around the world in 2004, and another seven million will be installed by the end of this year. In South Korea the Ministry of Information and Communication hopes to put a robot in every home there by 2013. The Japanese Robot Association predicts that by 2025, the personal robot industry will be worth more than $50 billion a year worldwide, compared with about $5 billion today.

As with the PC industry in the 1970s, it is impossible to predict exactly what applications will drive this new industry. It seems quite likely, however, that robots will play an important role in providing physical assistance and even companionship for the elderly. Robotic devices will probably help people with disabilities get around and extend the strength and endurance of soldiers, construction workers and medical professionals. Robots will maintain dangerous industrial machines and handle hazardous materials. They will enable health care workers to diagnose and treat patients who may be thousands of miles away, and they will be a central feature of security systems and search-and-rescue operations.

Although a few of the robots of tomorrow may resemble the anthropomorphic devices seen in Star Wars, most will look nothing like the humanoid C-3PO. In fact, as mobile peripheral devices become more and more common, it may be increasingly difficult to say exactly what a robot is. Because the new machines will be so specialized and ubiquitous—and look so little like the two-legged automatons of science fiction—we probably will not even call them robots. But as these devices become affordable to consumers, they could have just as profound an impact on the way we work, communicate, learn and entertain ourselves as the PC has had over the past 30 years.

[break] THE AUTHOR

BILL GATES is co-founder and chairman of Microsoft, the world's largest software company. While attending Harvard University in the 1970s, Gates developed a version of the programming language BASIC for the first microcomputer, the MITS Altair. In his junior year, Gates left Harvard to devote his energies to Microsoft, the company he had begun in 1975 with his childhood friend Paul Allen. In 2000 Gates and his wife, Melinda, established the Bill & Melinda Gates Foundation, which focuses on improving health, reducing poverty and increasing access to technology around the world.