Hello Nature readers, would you like to get this Briefing in your inbox free every week? Sign up here.
GPT-4 generates fake medical data
The large language model GPT-4 can be coaxed into producing fake clinical-trial data to support an unverified scientific claim. The AI-generated data compared the outcomes of two surgical treatments for an eye condition and suggested that one procedure is better than the other. In real trials, the two procedures lead to similar outcomes. Although the data don’t hold up to close scrutiny by authenticity experts, “to an untrained eye, this certainly looks like a real data set”, says biostatistician Jack Wilkinson.
Reference: JAMA Ophthalmology paper
‘Artificial brainstorming’ makes AI creative
Several AI agents can work together to solve chess puzzles that tend to stump computers. Researchers tried weaving together up to ten versions of the chess AI AlphaZero, each trained for different strategies. A ‘virtual matchmaker’ algorithm decides which agent has the best chance of succeeding. The system was able to solve more chess puzzles than AlphaZero alone: the artificial brainstorming session “leads to creative and effective solutions that one would miss without doing this exercise”, says AI researcher Antoine Cully.
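The core idea — a matchmaker that routes each puzzle to the agent most likely to solve it — can be illustrated with a minimal sketch. All names and structure here are hypothetical (the `Agent` class, `confidence` scoring by puzzle "motifs"); they are not AlphaZero's actual interface, just one simple way such a selection step could work.

```python
class Agent:
    """A hypothetical agent trained to handle certain puzzle motifs."""

    def __init__(self, name, strengths):
        self.name = name
        self.strengths = strengths  # set of motifs this agent handles well

    def confidence(self, puzzle_motifs):
        # Estimate success chance as the fraction of the puzzle's
        # motifs that fall within this agent's training strengths.
        if not puzzle_motifs:
            return 0.0
        return len(self.strengths & puzzle_motifs) / len(puzzle_motifs)


def matchmaker(agents, puzzle_motifs):
    # The 'virtual matchmaker': pick the agent with the highest
    # estimated chance of succeeding on this particular puzzle.
    return max(agents, key=lambda a: a.confidence(puzzle_motifs))


agents = [
    Agent("positional", {"zugzwang", "fortress"}),
    Agent("tactical", {"sacrifice", "fork"}),
]
best = matchmaker(agents, {"sacrifice", "fork"})
print(best.name)  # tactical
```

The diversity of the pool is what matters: an ensemble of identically trained agents would gain nothing from the matchmaker, whereas agents with distinct strengths cover each other's blind spots.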
Reference: arXiv preprint (not peer reviewed)
Data centres strain Africa’s water resources
‘Thirsty’ computing hubs could put pressure on already stretched water resources in sub-Saharan Africa and other regions where drinking water is scarce. Data centres that power AI technologies have a huge ‘water footprint’: they need water for cooling and contribute to power plants’ water usage through their vast electricity consumption. Yet water scarcity is rarely considered when deciding where to build data centres, says computer scientist Mohammad Atiqul Islam, co-author of a study of the problem. “Typically, companies care more about performance and cost.”
Reference: arXiv preprint (not peer reviewed)
Features & opinion
What the OpenAI drama means for progress
A debacle at OpenAI has highlighted concerns that commercial forces are acting against responsible development of AI. The company that built ChatGPT suddenly fired its co-founder and chief executive Sam Altman on 17 November, only to reinstate him five days later, after staff revolted. “The push to retain dominance is leading to toxic competition,” says Sarah Myers West at the AI Now Institute. She is among those who worry that products are appearing before anyone fully understands their behaviour, uses and misuses. “We need to start by enforcing the laws we have right now,” she says.
AI could find research ‘blind spots’
AI can propose undiscovered links between existing findings — already a routine process in areas including drug discovery. Scientists want to push this further to automatically generate broad, clear hypotheses even when a field’s underlying principles remain poorly understood. Large language models, for example, are known to ‘hallucinate’ statements that might not be correct but ‘look true’. “That’s exactly what a hypothesis is,” says economist Sendhil Mullainathan.
Video: How to 3D print a robot
An error-correcting 3D printer can create complex designs — such as a robotic hand with soft plastic muscles and rigid plastic bones — in one go. Combining different materials in the same print run is difficult. This inkjet-type printer builds 3D structures by spraying layer after layer of material. It keeps an electronic eye on any accidental lumps or bumps and compensates for them in the next layer. This removes the need for messy mechanical smoothing, which usually limits the materials that can be used.
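The compensation step described above can be sketched in a few lines. This is an illustrative toy, not the printer's actual control software: it assumes the scanner returns per-point surface heights, and the next layer simply deposits less material where the surface sits too high and more where it sits too low.

```python
TARGET_LAYER = 1.0  # nominal layer thickness (arbitrary units)


def next_deposit(measured_heights, layer_index):
    """Compute how much material to spray at each point of the next layer.

    measured_heights: surface heights scanned after printing layer_index.
    The deposit at each point is the nominal thickness minus the measured
    error (lump or dip), clamped at zero since material can't be removed.
    """
    target = TARGET_LAYER * layer_index
    return [max(0.0, TARGET_LAYER - (h - target)) for h in measured_heights]


# A bumpy first layer: a flat point, a lump (1.3) and a dip (0.8)
measured = [1.0, 1.3, 0.8]
print(next_deposit(measured, 1))  # [1.0, 0.7, 1.2]
```

Because errors are cancelled layer by layer rather than machined away, there is no mechanical smoothing pass — which is what opens the door to soft materials that contact-based smoothing would smear or tear.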
Reference: Nature paper