Yoshua Bengio is one of three computer scientists who last week shared the US$1-million A. M. Turing award — one of the field’s top prizes.
The three artificial-intelligence (AI) researchers are regarded as the founders of deep learning, the technique that combines large amounts of data with many-layered artificial neural networks, which are inspired by the brain. They received the award for making deep neural networks a “critical component of computing”.
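The "many-layered" idea can be sketched in a few lines: a deep network is just repeated linear maps separated by nonlinearities. This is an illustrative sketch only, not drawn from the interview; the layer sizes and random weights are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(42)
sizes = [8, 16, 16, 4]  # input -> two hidden layers -> output (arbitrary)

# Random weights and zero biases for each layer.
params = [(rng.normal(0.0, 0.1, (m, n)), np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]

def forward(x, params):
    """Pass the input through every layer, with a ReLU between layers."""
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.maximum(x, 0.0)  # ReLU nonlinearity
    return x

out = forward(rng.normal(size=(3, 8)), params)
print(out.shape)  # (3, 4): three inputs mapped to four outputs
```

In practice the weights are not random but learned from large amounts of data, which is the "deep learning" combination the researchers were recognized for.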
The other two Turing winners, Geoff Hinton and Yann LeCun, work for Google and Facebook, respectively; Bengio, who is at the University of Montreal, is one of the few recognized gurus of machine learning to have stayed in academia full time.
But alongside his research, Bengio, who is also scientific director of the Montreal Institute for Learning Algorithms (MILA), has raised concerns about the possible risks from misuse of technology. In December, he presented a set of ethical guidelines for AI called the Montreal declaration at the Neural Information Processing Systems (NeurIPS) meeting in the city.
Nature sat down with Bengio in London in January.
Do you see a lot of companies or states using AI irresponsibly?
There is a lot of this, and there could be a lot more, so we have to raise flags before bad things happen. A lot of what is most concerning is not happening in broad daylight. It’s happening in military labs, in security organizations, in private companies providing services to governments or the police.
What are some examples?
Killer drones are a big concern. There is a moral question, and a security question. Another example is surveillance — which you could argue has potential benefits. But the dangers of abuse, especially by authoritarian governments, are very real. Essentially, AI is a tool that can be used by those in power to keep that power, and to increase it. Another issue is that AI can amplify discrimination and biases, such as gender or racial discrimination, because those are present in the data the technology is trained on, reflecting people’s behaviour.
What sets the Montreal declaration apart from similar initiatives?
I think it was the first one that involved not just AI researchers, but a broad spectrum of scholars in the social sciences and the humanities, as well as the public — in a profound way. That led to changes: we went from seven to ten principles as a result of consultations with experts and the public. Organizations can pledge to follow those principles.
What is the most appropriate forum for discussions on AI ethics?
We’re trying to create an organization in Montreal that will do just that: the International Observatory on the Societal Impacts of Artificial Intelligence and Digital Technologies. It should bring in all of those actors: governments, because they are the ones who are going to take action; civil-society experts, which means both experts in AI technology and in the social sciences, health care and political science; and companies that are building these products.
But we have to do it carefully — because, of course, companies might push things in a direction that favours their bottom line.
Do you see this initiative leading to government or international regulations for AI?
Yes. Self-regulation is not going to work. Do you think that voluntary taxation works? It doesn’t. Companies that follow ethical guidelines would be disadvantaged with respect to the companies that do not. It’s like driving. Whether it’s on the left or the right side, everybody needs to drive in the same way; otherwise, we’re in trouble.
You have expressed concern that corporations have ‘stolen’ talent from academia. Is this still an issue?
It’s continuing. But there are also some good things happening. We’ve been successful in Montreal because the AI ecosystem is growing, and we are seeing a sort of reverse brain drain. People from outside Canada are coming to Canada to do research in AI.
Another thing happening in Montreal — and I think in other places around the world — is that academic-level researchers who are working for industry are taking on adjunct-faculty roles to supervise or co-supervise grad students in universities. That’s happening at MILA.
We’re also working on training students. We’re doubling the number of professors in machine learning in Montreal, thanks in part to the Canadian government’s investment through the pan-Canadian AI strategy.
Do you think Europe lags behind China and the United States in AI?
Yes. But I don’t think that Europe should accept that. Europe has huge potential to become a leader. There are outstanding universities in Europe. In fact, many students we have at MILA come from Europe. There is also a recent but vibrant tech community of start-up companies in several places in Europe. And governments are starting to realize the importance of AI. The French government was probably the first European government to make a big move in that direction.
What will be the next big thing in AI?
Deep learning, as it is now, has made huge progress in perception, but it hasn’t delivered yet on systems that can discover high-level representations — the kind of concepts we use in language. Humans are able to use those high-level concepts to generalize in powerful ways. That’s something even babies can do, but that machine learning is still very bad at.
We have this ability to reason about things that don’t actually happen in the data. We’ve made some progress with generative adversarial networks [a technique that sets a generative network in competition with an image-recognition network, to help both improve their performance], for example. But humans are much better than machines, and my guess is that one of the important ingredients is the understanding of cause and effect.
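The adversarial idea in brackets can be made concrete with a toy sketch. This is an illustrative example only, not Bengio's formulation: a one-dimensional generator g(z) = a·z + b tries to mimic samples from a Gaussian with mean 3, while a logistic-regression discriminator d(x) = sigmoid(w·x + c) tries to tell real samples from generated ones; the gradients are derived by hand for this tiny case.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator parameters (a, b) and discriminator parameters (w, c).
a, b = 1.0, 0.0
w, c = 0.1, 0.0
lr = 0.05

for step in range(3000):
    x_real = rng.normal(3.0, 1.0, size=64)   # real data: N(3, 1)
    z = rng.normal(0.0, 1.0, size=64)        # generator input noise
    x_fake = a * z + b

    # Discriminator step: lower loss = -log d(real) - log(1 - d(fake)).
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w -= lr * (np.mean((d_real - 1.0) * x_real) + np.mean(d_fake * x_fake))
    c -= lr * (np.mean(d_real - 1.0) + np.mean(d_fake))

    # Generator step (non-saturating): lower loss = -log d(fake).
    d_fake = sigmoid(w * x_fake + c)
    g_grad = (d_fake - 1.0) * w   # d(loss)/d(x_fake), chained through g
    a -= lr * np.mean(g_grad * z)
    b -= lr * np.mean(g_grad)

# After training, generated samples should cluster near the real mean of 3.
z = rng.normal(0.0, 1.0, size=1000)
samples = a * z + b
print(round(float(samples.mean()), 2))
```

The two networks improve each other exactly as the bracketed gloss describes: the discriminator's feedback is the only training signal the generator ever sees.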
This interview has been edited for length and clarity.