Policy framework for the utilization of generative AI

The ascent of generative artificial intelligence (AI) technologies has created a pivotal juncture in policy formulation, necessitating a comprehensive framework to govern their deployment [1,2,3]. Here, we summarize the guidelines on the use of generative AI issued by several institutions. In May 2023, the International Committee of Medical Journal Editors (ICMJE) issued updated recommendations stipulating that journals should require authors to disclose the use of AI-assisted technologies, such as large language models, chatbots, or image generation tools, during the manuscript writing process. Authors using these technologies are expected to describe their usage in both the cover letter and the manuscript. Moreover, authors must not attribute authorship or co-authorship to chatbots (e.g., ChatGPT), AI, or AI-assisted technologies. Authors must also ensure that their manuscripts are free of plagiarism, including in any text and images generated by AI. Reviewers are advised against uploading manuscripts to software platforms or other AI technologies that cannot guarantee confidentiality, and they should disclose to journals whether and how AI technologies were used in evaluating manuscripts or drafting reviewer comments.

The World Association of Medical Editors (WAME) has issued the following recommendations on the use of ChatGPT and chatbots in academic publishing: chatbots cannot be listed as authors; authors should be transparent when using chatbots and describe how they were used; authors are responsible for the work performed by chatbots in the manuscript, including the accuracy and originality of the presented content, and must acknowledge all sources, including materials generated by chatbots; and journal editors need appropriate tools to help detect content generated or modified by AI.
In addition, on December 21, 2023, the Chinese Ministry of Science and Technology issued regulations on responsible research conduct, stating that: directly generating funding application materials with generative AI is prohibited; content generated with generative AI, especially key factual and opinion-based content, must be clearly labeled and its generation process explained, to ensure authenticity, accuracy, and respect for intellectual property rights; content marked as AI-generated by other authors should generally not be cited as original literature; unverified references generated by generative AI must not be used directly; and generative AI cannot be listed as a co-contributor to research achievements.

In addition to the guidance from the above-mentioned agencies, a recent study published in BMJ examined whether the top 100 academic publishers and the 100 most cited journals have established guidelines for the use of generative AI tools [4]. Among the top 100 publishers, 24 (24%) had issued directives on the use of generative AI. Notably, 87% of the surveyed journals provided guidance on the use of generative AI, with 96% of publishers and 98% of journals explicitly prohibiting the attribution of authorship to generative AI systems. Regarding disclosure of generative AI use, 75% of publishers and 40% of journals specified how it should be disclosed, albeit with varying placement, including the research methodology, acknowledgments, cover letter, or other sections. It is evident that the recommendations offered by major publishers and journals on generative AI usage lack a structured consensus, exhibiting significant heterogeneity and conflicting guidance. This underscores the pressing need for interdisciplinary guidelines on the use of generative AI.

Availability of data and materials

The raw data of the current study are available from the corresponding author on reasonable request.

References

  1. Noy S, Zhang W. Experimental evidence on the productivity effects of generative artificial intelligence. Science. 2023;381(6654):187–92.

  2. Extance A. ChatGPT has entered the classroom: how LLMs could transform education. Nature. 2023;623(7987):474–7.

  3. Minssen T, Vayena E, Cohen IG. The challenges for regulating medical use of ChatGPT and other large language models. JAMA. 2023;330(4):315–6.

  4. Ganjavi C, Eppler MB, Pekcan A, et al. Publishers’ and journals’ instructions to authors on use of generative artificial intelligence in academic and scientific publishing: bibliometric analysis. BMJ. 2024;384: e077192.


Acknowledgements

The authors thank “home-for-researchers (www.home-for-researchers.com)” for their effort in polishing the English content of this manuscript.

Funding

None.

Author information

Authors and Affiliations

Authors

Contributions

CK and WH designed the study. CK collected the data. CK and WH analyzed the data and drafted the manuscript. WH revised and approved the final version of the manuscript. All authors read and approved the submitted version.

Corresponding author

Correspondence to Haiyang Wu.

Ethics declarations

Ethical approval

Ethics approval was not required for this study.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Cheng, K., Wu, H. Policy framework for the utilization of generative AI. Crit Care 28, 128 (2024). https://0-doi-org.brum.beds.ac.uk/10.1186/s13054-024-04917-z