Policy on the Use of Generative AI

The Revista Eletrônica Direito e Política (RDP) constantly seeks to improve its publication policies. With the advancement of technology, especially in the field of Artificial Intelligence, it has become necessary to adopt specific guidelines for the use of generative AI in scientific publishing.
To that end, RDP has adopted the recommendations of the World Association of Medical Editors (WAME), adapting them where necessary to the specificities of the legal field. WAME guidelines are widely used and adapted around the world and have been incorporated by several high-impact scientific journals.

The recommendations adopted by RDP are:

Recommendation 1: Chatbots cannot be authors. Some journals have begun publishing articles in which chatbots such as Bard, Bing, and ChatGPT were used, in some cases even listing these tools as co-authors. However, the legal status of authorship varies among countries and, in most jurisdictions, requires that the author be a natural person. Chatbots cannot provide “final approval of the version to be published,” nor can they “take responsibility for all aspects of the work, ensuring that questions related to the accuracy or integrity of any part are properly investigated and resolved.” Likewise, AI tools cannot “understand” a conflict-of-interest statement and lack the standing to sign one. Furthermore, they have no independent affiliation apart from their developers. Since authors submitting a manuscript must ensure that all listed authors fully meet the authorship criteria, it follows that chatbots cannot be recognized or included as authors.

Recommendation 2: Authors must be transparent when using chatbots and must disclose how they were used. The extent and type of chatbot use must be indicated in journal publications. In addition, authors must acknowledge writing assistance and provide, in the methods section, detailed information on how the study was conducted and how the results were generated.

Recommendation 2.1: Authors submitting an article in which a chatbot/AI was used to draft new text must disclose such use in the acknowledgments; all prompts used to generate new text or to convert text or text prompts into tables or illustrations must be specified.

Recommendation 2.2: When an AI tool such as a chatbot is used to perform or generate analytical work, to assist in reporting results (e.g., generating tables or figures), or to write computer code, this must be declared in the body of the article, both in the Abstract and in the Methods section. To allow scientific scrutiny, including replication and the detection of falsification, authors must provide the full prompt used to generate the research results, the time and date of the query, and the name and version of the AI tool used.

Recommendation 3: Authors are responsible for the material provided by a chatbot in their article (including the accuracy of what is presented and the absence of plagiarism) and for the appropriate attribution of all sources (including the original sources of material generated by the chatbot). Authors of articles written with the aid of chatbots are fully responsible for the material these tools generate, including its accuracy. It is the authors’ duty to ensure that the content reflects their own data and ideas and does not constitute plagiarism, fabrication, or falsification, and therefore potential scientific misconduct, regardless of how the text was produced. Likewise, authors must ensure that all cited material is properly attributed, with complete references, and that the cited sources support the claims made by the chatbot. Because these tools may be designed to omit sources that contradict the views they express, it is the authors’ duty to locate, analyze, and include such counterarguments in their work (noting that similar biases may also occur in human authorship). In addition, authors must clearly identify the chatbot used and the prompt employed, specifying the measures adopted to mitigate the risk of plagiarism, to provide a balanced view, and to ensure the accuracy of all references presented.

Recommendation 4: Editors and reviewers must disclose, to authors and to each other, any use of chatbots in manuscript evaluation and in generating reviews and correspondence. If they use chatbots in their communications with authors or with each other, they must explain how the tools were used. Editors and reviewers are responsible for any content and citations generated by a chatbot. They must be aware that chatbots retain the information provided to them, including manuscript content, and that supplying an author’s manuscript to a chatbot violates the confidentiality of the submitted work.

These recommendations on the use of Generative AI may be updated at any time, and all registered users of the journal will be notified.