Comment

How to write effective prompts for large language models

Engaging effectively with large language models is becoming increasingly vital as they proliferate across research landscapes. This Comment presents a practical guide to understanding their capabilities and limitations, along with strategies for crafting well-structured prompts, so that researchers can extract maximum utility from these artificial intelligence tools.
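
To make the idea concrete, a well-structured prompt can be assembled from a small set of reusable elements: a role for the model, an explicit task, constraints, a required output format and a cue to reason step by step. The sketch below is illustrative rather than taken from the article itself: it assumes the OpenAI Python SDK and a placeholder model name, and the proofreading task simply echoes the use case mentioned in the Acknowledgements.

# Illustrative sketch only -- not code from the article. Assumes the OpenAI
# Python SDK (openai>=1.0); any chat-style LLM client could be substituted.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# A structured prompt: role, task, constraints, output format, reasoning cue.
prompt = (
    "You are an experienced copy editor for a scientific journal.\n"
    "Task: proofread the paragraph below for grammar, clarity and flow.\n"
    "Constraints: preserve the author's meaning and all technical terms.\n"
    "Output format: first the revised paragraph, then a bulleted list of "
    "the changes you made and why.\n"
    "Work through the edits step by step before writing your answer.\n\n"
    "Paragraph: <paste text here>"
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; substitute any available chat model
    messages=[{"role": "user", "content": prompt}],
    temperature=0.2,  # low temperature favours consistent, conservative edits
)
print(response.choices[0].message.content)

The same decomposition transfers to other providers and models; only the client call changes, not the structure of the prompt.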

Acknowledgements

The writing was supported by the National Key R&D Program of China STI2030 Major Projects (2021ZD0204200), National Natural Science Foundation of China (32071045) and Shenzhen Fundamental Research Program (JCYJ20210324134603010). The funder had no role in the decision to publish or the preparation of this manuscript. I used GPT-4 and Claude 2.0 to proofread the manuscript on the basis of prompts described at: https://psyarxiv.com/9yhwz.

Author information

Corresponding author

Correspondence to Zhicheng Lin.

Ethics declarations

Competing interests

The author declares no competing interests.

Peer review

Peer review information

Nature Human Behaviour thanks Lydia Chilton and Yuekang Li for their contribution to the peer review of this work.

About this article

Cite this article

Lin, Z. How to write effective prompts for large language models. Nat Hum Behav 8, 611–615 (2024). https://doi.org/10.1038/s41562-024-01847-2
