AIGC Policy
Cultural Arts Research and Development recognizes the growing use of generative artificial intelligence (AI) and AI-assisted technologies in academic research and scholarly communication. The journal is committed to ensuring that the use of such technologies is transparent, responsible, and consistent with the principles of research integrity.
This policy applies to all participants in the publication process, including authors, reviewers, and editors. It addresses the use of AI tools in the preparation, evaluation, and editorial handling of manuscripts.
This policy specifically concerns the use of generative AI in the writing and editorial process. It does not restrict the legitimate use of AI tools as part of the research methodology, such as for data analysis, modeling, or computational research, provided that such use is appropriately described in the Methods section of the manuscript.
Use of Generative AI by Authors
Authors may use generative AI or AI-assisted technologies to support the preparation of manuscripts for purposes such as language editing, grammar correction, translation, or improving clarity. However, such use must be conducted responsibly and transparently.
If generative AI tools are used during manuscript preparation, authors must:
- Disclose the use of AI tools in the manuscript, typically in the Acknowledgements section or a dedicated disclosure statement.
- Carefully review and verify all AI-generated content to ensure accuracy, originality, and scholarly integrity.
- Ensure that the use of AI does not replace the authors’ intellectual contributions or academic judgment.
Authors remain fully responsible for the content of their manuscripts, including the accuracy of the information presented, the originality of the work, and compliance with the journal’s ethical standards.
Generative AI tools cannot be listed as authors or co-authors, as they are unable to assume responsibility for the work, approve the final manuscript, or meet the authorship criteria established by the International Committee of Medical Journal Editors (ICMJE).
Use of AI in Research Methods
The journal recognizes that AI-based tools may be legitimately used within research workflows, including for data processing, statistical modeling, computational analysis, or digital humanities research.
When AI tools are used as part of the research methodology, authors should:
- Clearly describe the AI tools or algorithms used;
- Explain how the tools contributed to the analysis or interpretation of the data;
- Provide sufficient methodological details to ensure transparency and reproducibility where applicable.
Such use is considered part of the research methodology rather than the writing process and should be reported accordingly in the manuscript.
Use of AI Tools in Peer Review
Peer review relies on confidentiality, independent judgment, and scholarly expertise. Reviewers must therefore exercise caution when using AI tools.
Reviewers must not upload or share confidential manuscript content with generative AI platforms or external systems that may store, process, or reuse the information. Doing so could violate the confidentiality obligations of peer review and risk unauthorized disclosure of unpublished research.
If reviewers use AI tools to improve the clarity or organization of their review reports, they must ensure that:
- No confidential or unpublished manuscript content is shared with external AI systems;
- All scientific evaluations, interpretations, and recommendations are made independently by the reviewer.
Reviewers remain solely responsible for the content and conclusions of their peer review reports.
Use of AI Tools by Editors
Editors may use AI-assisted technologies for limited administrative or editorial support tasks, such as improving the clarity of editorial communications or summarizing reviewer comments.
However, editors must ensure that:
- Editorial decisions are made solely by human editors based on scholarly judgment;
- Confidential manuscript content is not shared with external AI systems in ways that may compromise confidentiality.
The editorial team may use appropriate screening tools and editorial checks to identify potential misuse of generative AI or suspicious patterns in submitted manuscripts.
Any misuse or undisclosed use of generative AI technologies that compromises research integrity may be considered a violation of the journal’s publication ethics policies. When concerns arise, the Editorial Office may conduct an assessment and, where necessary, request clarification or additional information from the authors.

If inappropriate use of AI is confirmed, the journal may take editorial action in accordance with its ethical guidelines and established procedures. Depending on the circumstances and the stage of the publication process, such actions may include manuscript rejection, publication of a correction, an expression of concern, or retraction of the article. Investigations will be conducted in accordance with the principles and recommendations of the Committee on Publication Ethics (COPE).