As large language models spread across research, engaging with them effectively is an increasingly essential skill. This Comment offers a practical guide to their capabilities and limitations, along with strategies for crafting well-structured prompts, to help researchers get the most out of these artificial intelligence tools.
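The strategies described in the Comment revolve around structuring a query explicitly rather than posing an open-ended question. As a rough illustration only, the sketch below assembles a prompt with a persona, context, a concrete task and a required output format, and sends it through the OpenAI Python client; the choice of client, the model name and the editing task are assumptions made for this example, not details taken from the article.

```python
# A minimal sketch of a well-structured prompt, assuming the OpenAI
# Python client (pip install openai) and an OPENAI_API_KEY set in the
# environment. The model name and the proofreading task are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A structured query: persona, context, explicit task and output format.
prompt = (
    "You are an experienced copy editor for a scientific journal.\n"
    "Context: the text below is a draft abstract for a psychology paper.\n"
    "Task: correct grammar and tighten wording without changing meaning.\n"
    "Output format: return only the revised abstract.\n\n"
    "Draft: <paste draft abstract here>"
)

response = client.chat.completions.create(
    model="gpt-4",  # any chat-capable model would work here
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The same structured wording works verbatim in a chat interface; the API call merely makes the structure explicit and repeatable.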
Acknowledgements
The writing was supported by the National Key R&D Program of China STI2030 Major Projects (2021ZD0204200), the National Natural Science Foundation of China (32071045) and the Shenzhen Fundamental Research Program (JCYJ20210324134603010). The funders had no role in the decision to publish or the preparation of this manuscript. I used GPT-4 and Claude 2.0 to proofread the manuscript on the basis of prompts described at: https://psyarxiv.com/9yhwz.
Competing interests
The author declares no competing interests.
Peer review information
Nature Human Behaviour thanks Lydia Chilton and Yuekang Li for their contribution to the peer review of this work.
Cite this article
Lin, Z. How to write effective prompts for large language models. Nat Hum Behav 8, 611–615 (2024). https://doi.org/10.1038/s41562-024-01847-2
This article is cited by
- Techniques for supercharging academic writing with generative AI. Nature Biomedical Engineering (2024).