ChatGPT and academic publishing: Potential and perils
Corresponding author: Dr. Keshavamurthy Vinay, Department of Dermatology, Venereology and Leprology, Post Graduate Institute of Medical Education and Research, Chandigarh, India. vinay.keshavmurthy@gmail.com
How to cite this article: Mehta H, Bishnoi A, Reddy A, Vinay K. ChatGPT and academic publishing: Potential and perils. Indian J Dermatol Venereol Leprol. 2024;90:849. doi: 10.25259/IJDVL_533_2023
Dear Editor,
ChatGPT (Chat Generative Pre-trained Transformer) is a recently launched for-profit artificial intelligence (AI) technology developed by OpenAI Incorporated (California, United States). In this watershed moment of technological advancement, the medical community is poised to be significantly impacted and transformed by this AI revolution. Potential applications of AI in medicine include disease diagnosis, risk assessment, precision medicine, drug discovery, electronic health record maintenance, robotic surgery and academic publishing.
In the field of academic publishing, ChatGPT has already been utilised in identifying literature gaps, summarising literature, drafting and editing manuscripts (Supplementary material), preparing patient information sheets and performing statistical analysis.1 ChatGPT can assist users in various aspects of manuscript creation, including idea brainstorming, manuscript outlining and structuring, fact-checking, providing language assistance and feedback, and helping overcome writer’s block (Supplementary material). Further potential applications include designing clinical trials, writing complete manuscripts, conducting peer review and aiding editorial decisions.
Although the potential is immense, the technology is not without its perils [Table 1]. While ChatGPT has demonstrated impressive abilities in generating fluent and seemingly rational text, it is important to remember that it relies on the data it was trained on and may not always produce accurate or reliable information. The programme is unlikely to have access to all available literature in the field, and incorrect or fabricated references have been a recurring issue in AI-generated manuscripts.2 Moreover, the algorithm lacks the ability to discern the credibility of the sources it uses to generate responses. In the context of medical publishing, this could result in the spread of misinformation.3 Therefore, AI-generated responses should be carefully evaluated before relying on them for medical decision-making or dissemination of medical information.
Table 1: Potentials and perils of ChatGPT
It is apparent that the emergence of AI technology has presented ethical dilemmas that the medical community is largely unprepared to handle. Crediting ChatGPT as an author on publications has sparked heated debate within the medical community.4 There is a general consensus that ChatGPT cannot be considered a legitimate author, as it lacks accountability for the content it generates.5 Completely banning AI-based algorithms from academic publishing is neither feasible nor sensible, as it would deprive the medical community of valuable and constructive tools. Furthermore, at present, there are no reliable tools available to differentiate between AI-generated and human-generated text, which makes it difficult to completely exclude AI-generated text from medical writing. Therefore, it is essential to establish clear ground rules for the use of AI in academic publishing to ensure that the benefits of AI language models are harnessed responsibly and ethically. Transparency, integrity and accountability on the authors’ part are vital. One way to achieve these aims is to obtain author declarations on the extent of AI use in manuscript preparation. Rigorous human oversight at every step of publication is paramount to safeguard the medical literature from the errors, biases and inaccuracies that AI-generated information may introduce.
Intellectual property rights pose another challenge, as it is currently difficult to define who has ownership over AI-generated text. OpenAI’s terms of use specify that OpenAI assigns to users all “its right, title and interest in and to output.”6 However, the terms also assign the user the responsibility to ensure that their use of ChatGPT’s responses complies with relevant laws and regulations. Due diligence on the part of authors is critical to determine ownership of AI-authored text, to avoid potential legal disputes and to ensure that the creators and authors of the source data are appropriately compensated. While preparing manuscripts using ChatGPT, authors may end up feeding their own data to the AI algorithm. As per OpenAI’s privacy policy, certain data from user interactions with the algorithm is retained in its database.7 This data may be at risk of being accessed or used without permission. Researchers should hence undertake appropriate measures to protect the privacy and confidentiality of their data, and carefully consider the potential risks and benefits of sharing their data in any form. Patient consent forms should explicitly include permission for the sharing of confidential data across AI platforms.
AI has thus shown immense potential in the field of medical writing and is expected to be integrated into the publishing system in the years to come. However, the existing technology is limited by biases, misinformation and inaccuracies. While ChatGPT can serve as a valuable complement to one’s work, it is essential to maintain the central role of human expertise and critical thinking in the formulation of manuscripts. To safeguard the sanctity of academic publishing, ethical guidelines governing the use of these technologies need to be established.
Declaration of patient consent
Patient’s consent is not required as there are no patients in this study.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
Use of artificial intelligence (AI)-assisted technology for manuscript preparation
The authors confirm that they have used artificial intelligence (AI)-assisted technology to assist in the writing or editing of the manuscript or the creation of images.
References
- Artificial intelligence-derived dermatology case reports are indistinguishable from those written by humans: A single-blinded observer study. J Am Acad Dermatol. 2023;89:388-90.
- Consulting ChatGPT: Ethical dilemmas in language model artificial intelligence. J Am Acad Dermatol. 2023. doi: 10.1016/j.jaad.2023.02.052 [Epub ahead of print].
- ChatGPT listed as author on research papers: Many scientists disapprove. Nature. 2023;613:620-1.
- ChatGPT Utility in Healthcare Education, Research, and Practice: Systematic Review on the Promising Perspectives and Valid Concerns. Healthcare (Basel). 2023;11:887.
- OpenAI. Terms of Use [Internet]. Available from: https://openai.com/policies/terms-of-use [Accessed on 8 November 2023].
- OpenAI. Privacy Policy [Internet]. Available from: https://openai.com/policies/privacy-policy [Accessed on 8 November 2023].