Reviewers should treat a researcher's manuscript or preprint as confidential (Generative AI Policies for Journals, n.d.). This protects the researcher's identity, contact details, and other sensitive information, and it prevents an AI tool from ingesting the document and incorporating it into the material it draws on when responding to other users.
This confidentiality requirement extends to the peer review report itself, which may contain confidential information about the manuscript and its authors. For this reason, reviewers should not upload their peer review report to an AI tool, even if only to improve its language and readability (Generative AI Policies for Journals, n.d.).
Uploading a researcher's manuscript or preprint to an AI tool carries real risks. The tool could learn from the document and reproduce its content in responses to other users, which would amount to a form of plagiarism enabled by generative AI.
Reference:
Generative AI policies for journals. (n.d.). Elsevier. https://www.elsevier.com/about/policies-and-standards/generative-ai-policies-for-journals