A classic question in cognitive science is whether learning to solve visual tasks requires innate, domain-specific inductive biases. A recent study trained machine-learning systems on the first-person visual experiences of children to show that visual knowledge can be learned in the absence of innate inductive biases about objects or space.
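One way to set up such an experiment is to train a generic visual encoder on frames from a child's head-mounted camera with a self-supervised objective, so that no object- or space-specific priors are built in. The sketch below illustrates this idea only; the encoder, the temporal contrastive loss, and the stand-in data are assumptions for illustration, not the study's actual pipeline.

```python
# Illustrative sketch: self-supervised learning from egocentric video frames,
# with no innate object- or space-specific inductive biases.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18

encoder = resnet18(num_classes=128)          # generic CNN encoder used as a feature extractor
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-4)

def info_nce(anchor, positive, temperature=0.1):
    """Contrastive loss: pull embeddings of temporally adjacent frames together."""
    anchor = F.normalize(anchor, dim=1)
    positive = F.normalize(positive, dim=1)
    logits = anchor @ positive.t() / temperature   # similarity of every pair in the batch
    labels = torch.arange(anchor.size(0))          # the matching index is the positive pair
    return F.cross_entropy(logits, labels)

# stand-in batch: pairs of frames a few seconds apart from a child's head-camera video
frames_t = torch.randn(32, 3, 224, 224)
frames_t_plus = torch.randn(32, 3, 224, 224)

loss = info_nce(encoder(frames_t), encoder(frames_t_plus))
loss.backward()
optimizer.step()
```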
AI tools such as ChatGPT can respond to queries on any topic, but can such large language models accurately ‘write’ molecules to our specification? Results now show that models trained on general text can be fine-tuned with small amounts of chemical data to predict molecular properties, or to design molecules with a target feature.
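In practice, such fine-tuning can be as simple as attaching a small regression head to a pretrained general-text model and training it on SMILES strings paired with a measured property. The sketch below is a minimal illustration under that assumption; the base model, the toy molecules and the property values are placeholders, not those used in the study.

```python
# Illustrative sketch: fine-tune a general-text language model on a small set of
# SMILES strings to predict a numeric molecular property (regression).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "gpt2"  # placeholder general-text model; the study's models may differ
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token          # GPT-2 has no pad token by default
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=1)                      # single scalar output -> property regression
model.config.pad_token_id = tokenizer.pad_token_id

# toy chemical data: SMILES strings with a hypothetical measured property
smiles = ["CCO", "c1ccccc1", "CC(=O)O"]
targets = torch.tensor([[0.42], [1.10], [0.75]])

batch = tokenizer(smiles, padding=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

outputs = model(**batch, labels=targets)           # mean-squared-error loss for one label
outputs.loss.backward()
optimizer.step()
```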
Recent work has demonstrated important parallels between human visual representations and those found in deep neural networks. A new study comparing functional MRI data to deep neural network models highlights factors that may determine this similarity.
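A common way to quantify such brain–model similarity is representational similarity analysis (RSA): build a representational dissimilarity matrix from the fMRI responses and another from a network layer's activations over the same images, then correlate the two. The sketch below assumes this method for illustration and uses random stand-in data rather than the study's measurements.

```python
# Illustrative sketch: representational similarity analysis (RSA) between fMRI
# response patterns and deep-network activations for the same set of images.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_images = 50

# stand-in data: voxel responses and one DNN layer's activations per image
fmri_patterns = rng.standard_normal((n_images, 500))     # images x voxels
dnn_activations = rng.standard_normal((n_images, 2048))  # images x units

# representational dissimilarity matrices (condensed form): 1 - Pearson correlation
rdm_brain = pdist(fmri_patterns, metric="correlation")
rdm_model = pdist(dnn_activations, metric="correlation")

# similarity of the two representational geometries
rho, _ = spearmanr(rdm_brain, rdm_model)
print(f"brain-model RSA (Spearman rho): {rho:.3f}")
```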