Reviews & Analysis

  • Tailoring the alignment of large language models (LLMs) to individuals is a new frontier in generative AI, but unbounded personalization can cause harms such as large-scale profiling, privacy infringement and bias reinforcement. Kirk et al. develop a taxonomy of the risks and benefits of personalized LLMs and discuss the need for normative decisions on the acceptable bounds of personalization.

    • Hannah Rose Kirk
    • Bertie Vidgen
    • Scott A. Hale
    Perspective
  • A classic question in cognitive science is whether learning to solve visual tasks requires innate, domain-specific inductive biases. A recent study trained machine-learning systems on the first-person visual experiences of children to show that visual knowledge can be learned in the absence of innate inductive biases about objects or space.

    • Justin N. Wood
    News & Views
  • An emerging research area in AI is developing multi-agent capabilities with collections of interacting AI systems. Andrea Soltoggio and colleagues develop a vision for combining such approaches with current edge computing technology and lifelong learning advances. The envisioned network of AI agents could quickly learn new tasks in open-ended applications, with individual AI agents independently learning and contributing to and benefiting from collective knowledge.

    • Andrea Soltoggio
    • Eseoghene Ben-Iwhiwhu
    • Soheil Kolouri
    Perspective
  • As the impacts of AI on everyday life increase, guidelines are needed to ensure ethical deployment and use of this technology. This is even more pressing for technology that interacts with groups that need special protection, such as children. In this Perspective Wang et al. survey the existing AI ethics guidelines with a focus on children’s issues, and provide suggestions for further development.

    • Ge Wang
    • Jun Zhao
    • Nigel Shadbolt
    Perspective
  • AI tools such as ChatGPT can respond to queries on any topic, but can such large language models accurately ‘write’ molecules as output to our specification? Results now show that models trained on general text can be fine-tuned with small amounts of chemical data to predict molecular properties, or to design molecules with a target feature.

    • Glen M. Hocky
    News & Views
  • Training a machine learning model with multiple tasks can create more-useful representations and achieve better performance than training models for each task separately. In this Perspective, Allenspach et al. summarize and compare multi-task learning methods for computer-aided drug design.

    • Stephan Allenspach
    • Jan A. Hiss
    • Gisbert Schneider
    Perspective
  • Machine learning algorithms play important roles in medical imaging analysis but can be affected by biases in training data. Jones and colleagues discuss how causal reasoning can be used to better understand and tackle algorithmic bias in medical imaging analysis.

    • Charles Jones
    • Daniel C. Castro
    • Ben Glocker
    Perspective
  • Recent work has demonstrated important parallels between human visual representations and those found in deep neural networks. A new study comparing functional MRI data to deep neural network models highlights factors that may determine this similarity.

    • Katja Seeliger
    • Martin N. Hebart
    News & Views
  • Machine learning is increasingly applied for disease diagnostics due to its ability to discover differentiating features in data. However, the clinical applicability of these models remains a challenge. Pavlović et al. provide an overview of the challenges in using machine learning for biomarker discovery and suggest a causal perspective as a solution.

    • Milena Pavlović
    • Ghadi S. Al Hajj
    • Geir K. Sandve
    Perspective