ADVERTISEMENT FEATURE: Advertiser retains sole responsibility for the content of this article

Shaping tomorrow’s world through responsible AI

A child interacting with a robot. As devices and systems that depend on artificial intelligence become pervasive, concerns grow about whether society is ready to manage the risks. Credit: Yuganov Konstantin/Shutterstock

Since bursting onto the AI scene in 2022, ChatGPT has become part of everyday life, enhancing communication and boosting efficiency in everything from customer service, education, healthcare, IT and entertainment to academic research.

But the breathtaking uptake of ChatGPT has also heightened concerns about whether society is ready to manage the risks — from disinformation campaigns to bias.

To help address those concerns, 50 leading experts came together at the ‘International AI Cooperation and Governance Forum 2023’ in Hong Kong, 8–9 December 2023. The forum was organized by the Hong Kong University of Science and Technology (HKUST) in partnership with Tsinghua University, which is based in Beijing.

At the forum, the experts discussed safe and equitable access to AI, social acceptance and fear of AI, barriers to further development of AI systems, algorithm bias, AI security, and how to use AI in the governance of AI.

Who is in charge?

“We should always want AI technology to remain under human control,” Brad Smith, vice chair and president of Microsoft, told the forum. “We should call on it to serve humanity, to help us solve some of the biggest problems that affect everyone around the world.”

Nancy Ip, president of the Hong Kong University of Science and Technology, opens the ‘International AI Cooperation and Governance Forum 2023’ in Hong Kong.

Nancy Ip, HKUST’s president, agreed. “Governance is vital to advance and popularize the use of any new technology,” she said in her opening remarks.

Concerns about AI can be traced back to Greek mythology, long before the term was coined in 1956, Stephen Cave, director of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, in the United Kingdom, told the forum.

Cave, a philosopher, and his team have analysed more than 300 movies, novels and works of non-fiction about AI, categorizing them according to the fundamental hopes and fears that are expressed in them. For example, the hope that AI will offer power over others (‘dominance’), or the hope of a life free from work (‘ease’). They concluded that society’s perceptions of AI are mostly not based on fact. Nonetheless, these perceptions may still influence those in charge of deployment and regulation, Cave pointed out.

Trusted gatekeepers

For Yoshua Bengio, a professor in the Department of Computer Science and Operations Research at the Université de Montréal in Quebec, Canada, strong, enforceable regulation is critical. “While these powerful AI systems can yield numerous benefits, they can also cause significant harm,” he says.

Bengio shared the 2018 Turing Award for his pioneering work on deep learning, the AI method that teaches computers to process data in a way inspired by the human brain.

The misuse of powerful AI systems by malicious actors such as terrorists and criminals, particularly systems released as open source, is a major concern, he says. Systems such as AlphaGo, for playing the board game Go, and AlphaFold, for deciphering complex protein structures, now surpass human performance. Those superhuman abilities heighten concerns about other AI systems being used for nefarious purposes, such as disinformation campaigns, cyberattacks, and even the creation of weapons, including chemical ones.

Bengio advocated for limiting access to the most powerful AI systems to vetted experts who would be regulated. This would be safer, he argued, than leaving AI development to private companies or making it open access. Set up correctly, this so-called ‘structured access’ would make it easier for a third party to audit models, spot safety or ethical failures, and make decisions about expanding access, he said.

A highlight of the forum was the roundtable discussion Developing a Global Framework for AI Governance.

During that discussion, Pascale Fung, director of the Center for Artificial Intelligence Research (CAiRE) and chair professor in the Department of Electronic and Computer Engineering at HKUST, raised concerns about another type of access — across language barriers.

Thousands of languages are used globally, yet today’s large language models, the basis of ChatGPT, are trained predominantly on English-language text. “Democratising AI is not just about enabling everyone to use it, but also empowering everyone to understand these models, and to build these models themselves as well,” she said.

On the same panel, mechanical engineer Jianrong Tan, from Zhejiang University in Hangzhou, China, proposed that AI itself will need to be deployed to mitigate potential AI harms. That perspective was shared by computer scientist Wen Gao, director of Peng Cheng Laboratory in Shenzhen, China. Tan also argued that an effective AI governance system would need to increase both the adoption and the performance of AI.

Technical hitch

But others noted that there are technical barriers to improved performance. Since a wave of innovation propelled the rise of machine learning in 2012, demand for the computational power needed both to train AI systems and to use them has grown exponentially.

“Computational power has shifted from being a crucial support for artificial intelligence to a limiting factor,” said Qionghai Dai, dean of the School of Information Science and Technology at Tsinghua. “To accelerate the algorithms of artificial intelligence, we must revolutionise microprocessors.”

Photonic computing may help to solve this problem. It uses light waves — rather than the electrical signals used in silicon chips — to carry and manipulate information at very high speeds. Recently, Dai and his colleagues developed a chip that integrates optical and electronic computing. This photoelectronic prototype is 3,000 times faster than one of the most widely used commercial silicon AI chips (ref. 1), and has ultralow energy consumption, Dai told the forum.

No AI conference would be complete without an extended discussion of algorithmic bias, which can arise from biased training data or flawed training procedures.

Joaquin Quiñonero Candela, AI technical fellow at LinkedIn (left), discusses AI ethics with Chloé Bakalar (right), chief ethicist at Meta.

Joaquin Quiñonero Candela, LinkedIn’s first technical fellow, has spent the past two years examining the use of AI across the company. He began studying AI bias in 2018 and emphasized that fairness is not simple. For example, if AI is used to identify job candidates, biased recruiters using unbiased AI algorithms will still end up with biased recruitment results.
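To make that point concrete, here is a minimal, purely hypothetical simulation; it assumes nothing about LinkedIn’s actual systems. A group-blind model shortlists candidates from two identically skilled groups, yet a reviewer who discounts one group’s scores still ends up with a skewed hire pool.

```python
import random

random.seed(0)

# Hypothetical illustration only (not LinkedIn's system): an unbiased
# model ranks candidates, but a biased human step downstream still
# skews who ultimately gets hired.

def make_candidates(n_per_group=1000):
    # Both groups draw skill scores from the same distribution.
    return [{"group": g, "score": random.gauss(0, 1)}
            for g in ("A", "B") for _ in range(n_per_group)]

def model_shortlist(candidates, k=200):
    # The model ranks purely on score; group membership is never used.
    return sorted(candidates, key=lambda c: c["score"], reverse=True)[:k]

def biased_recruiter(shortlist, k=50, penalty=0.5):
    # The recruiter mentally discounts group B scores before choosing.
    adjusted = sorted(
        shortlist,
        key=lambda c: c["score"] - (penalty if c["group"] == "B" else 0),
        reverse=True,
    )
    return adjusted[:k]

hired = biased_recruiter(model_shortlist(make_candidates()))
share_b = sum(c["group"] == "B" for c in hired) / len(hired)
# Prints well below the ~50% share the unbiased model alone would give.
print(f"Group B share of hires: {share_b:.0%}")
```

The skew appears even though the model is group-blind, which is why fairness work has to consider the whole decision pipeline, not just the algorithm.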

Nonetheless, in an effort to operationalize AI fairness at scale, Quiñonero Candela and researchers at LinkedIn have deployed AI algorithms that identify candidates based on skills, not education or job titles (ref. 2). This can bring recruiters more than 10 times the potential employees, says Quiñonero Candela. “It could also amplify underrepresented groups,” he told the forum.

Concepts of fairness illustrate the need for a global AI governance system to be based around values — fundamental principles that govern human behaviour — which may be different for different cultures, said Chloé Bakalar, chief ethicist at Meta (formerly the Facebook company).

Fung agreed, advocating for better communication between the technical community and those in governance to help ensure that AI regulatory policy is based on shared values. “Technical people need to understand human interests and values more, while policymakers should ‘listen’ to technology,” she said.

To conclude the event, Yike Guo, HKUST’s provost and secretary general of the forum’s organizing committee, said he hoped it would become an annual event for the world’s top minds to come together to steer AI development in the direction of benefiting humanity.

For more information, please visit the International AI Cooperation and Governance Forum 2023.

References

  1. Chen, Y., Nazhamaiti, M., Xu, H. et al. Nature 623, 48–57 (2023).

  2. Quiñonero Candela, J., Wu, Y., Hsu, B. et al. FAccT ’23: Proc. 2023 ACM Conference on Fairness, Accountability, and Transparency, 1213–1228 (2023).
