An eye on Artificial intelligence (AI) from the Editor’s perspective
Corresponding author: Dr. Feroze Kaliyadan, Department of Dermatology, Sree Narayana Institute of Medical Sciences, Ernakulam, India. ferozkal@hotmail.com
How to cite this article: Kaliyadan F, Singal A. An eye on Artificial intelligence (AI) from the Editor’s perspective. Indian J Dermatol Venereol Leprol. 2025;91:285-6. doi: 10.25259/IJDVL_574_2025
The classical test to evaluate whether a machine can think intelligently like a human being is the Turing test, designed by Alan Turing in 1950. A typical format involves three participants: a human, a machine, and a human judge. The basic principle is that if the machine responds to the judge’s questions in a way that the judge cannot distinguish it from the human participant, it has passed the Turing test. It is debatable whether the advent of advanced Generative Artificial Intelligence (Gen AI) models like ChatGPT, Google Gemini, or newer ones like DeepSeek has rendered the Turing test redundant, because AI can now reason and also create like a human being. There are, in fact, tests, like the Lovelace 2.0 test, that can assess the creativity of AI-based models.1
In the context of clinical research, Gen AI has introduced incredible possibilities: from making clerical work related to research easier, to automating processes related to systematic reviews/meta-analyses and easing the analysis of big data. From the point of view of a journal, the key concerns would be:
- Was AI used in the study or the preparation of the manuscript?
- How exactly was it used?
- Has it been used ethically?
- Has the AI-generated material been cross-checked?
The reference guidelines that journals can follow are the International Committee of Medical Journal Editors (ICMJE) recommendations on defining the role of authors and contributors, which, in the case of AI use in publications, emphasise a few points:2
- Journals should instruct authors (and authors should comply) to fully disclose whether and how AI was used in their work. For example, even the use of AI tools for simple tasks like editing the background of images should be disclosed, ideally in the corresponding section of the main manuscript (for example, if AI was used for image editing, mention this in the legend). Details of the use of AI should be mentioned in the cover letter and the main manuscript. Full disclosure enables the journal to make informed decisions regarding the content on a case-to-case basis.
- Generative AI tools should not be listed among the authors, as they cannot be accountable for the ethical aspects and accuracy of the work (accountability being essential for authorship).
- It is the responsibility of the author(s) to ensure that any AI output used in their work is thoroughly cross-checked and to take full responsibility for the material, including AI-generated work. In particular, authors should ensure that all work is appropriately cited.
There are also concerns regarding how journals should proceed in cases of inadequate disclosure. Does this warrant punitive action, akin to that for plagiarism and other forms of scientific misconduct? Should it be dealt with on a case-to-case basis? Clearer guidelines on this aspect need to be formulated.
There is also uncertainty regarding how much detail about the use of generative AI in a manuscript needs to be disclosed. It would be good practice to save and share prompt links so that reviewers and readers can cross-check the output.
Peer review using AI tools is another area of concern. Many tools offer quick summaries of documents. While getting an AI-based summary is not in itself unethical, it should not replace a detailed appraisal by the reviewers. Again, the key point is that AI is not always correct. Reviewers are ethically bound to go through the review process diligently to ensure fairness. However, AI can indeed make work easier for the reviewing team in aspects like grammar and data accuracy (especially in tables).
Checking for AI-generated material is essentially the editors’/publishers’ responsibility. As with plagiarism detectors, there are many tools to detect AI-generated text and images. However, the use of AI goes beyond text/image generation. The whole literature review, data cleaning/analysis, and referencing process can also be automated to a large extent using AI, and reference managers have already incorporated AI. It would be difficult to conclusively detect the use of AI in all these situations. Diligence and full disclosure by the authors are therefore key to the ethical use of AI in research.
The future?
There are doubts even regarding AI’s own predictions of AI’s future! It can be reasonably expected that the accuracy of output will increase, and that issues like AI hallucinations and fake reference generation will become virtually nil. Entire portions of research methodology, especially literature review and data analysis, will become automated. AI could also become a valuable tool for editorial screening of submitted articles, thereby saving a lot of time.
AI can definitely make the research process easier at all levels, for all stakeholders, and in the long run, not using AI effectively can actually put researchers at a disadvantage. What will not change, however, is that ultimately a human would still need to cross-check AI output at every level.
References
1. Mei Q, Xie Y, Yuan W, Jackson MO. A Turing test of whether AI chatbots are behaviorally similar to humans. Proc Natl Acad Sci U S A. 2024;121:e2313925121.
2. ICMJE. Updated ICMJE Recommendations, May 2023. Available from: https://www.icmje.org/news-and-editorials/updated_recommendations_may2023.html. Last accessed on 2025 Mar 16.