AIGC Policy

In response to the increasing use of generative AI and AI-assisted technologies in scientific writing, Cultural Arts Research and Development has established the following policy to uphold research integrity, transparency, and ethical standards throughout the editorial and publication process. It applies to all stakeholders, including authors, reviewers, and editors. Please note that this policy pertains specifically to the writing process; it does not apply to the legitimate use of AI tools for data analysis or for deriving insights as part of the research methodology.

Use of Generative AI by Authors

If generative AI tools are used to assist in manuscript preparation—such as for language improvement, translation, summarization, or content generation—the use must be explicitly disclosed in the manuscript, and the output must be independently reviewed and verified by the authors.

Regardless of whether AI tools are used, authors remain fully responsible for the content of their submissions, including factual accuracy, originality, and compliance with ethical standards. Generative AI tools must not replace the author’s academic judgment or contribution.

AI tools must not be credited as authors or co-authors. Such tools do not meet authorship criteria and cannot assume legal or ethical responsibility.

Use of AI Tools in Peer Review

Reviewers must not input confidential manuscript content into any generative AI platform. Doing so may violate confidentiality agreements and risk unauthorized disclosure of unpublished information.

If reviewers choose to use AI tools to enhance the clarity or structure of their review reports, they must ensure that:

  • No confidential or unpublished content is shared with the AI platform.
  • All evaluations and recommendations are made independently by the reviewer—not generated by AI.

Use of AI Tools by Editors

Editors may use AI tools for non-decision-making tasks, such as summarizing reviewer comments or checking grammar in editorial correspondence. However, editors must not rely on AI for critical editorial decisions and must not expose unpublished manuscript content to external platforms.

The editorial team will actively monitor for inappropriate or unethical use of AI, including the submission of fully or partially AI-generated manuscripts without proper disclosure. Appropriate screening measures will be used to detect potential misuse or fraud.

Breach of Policy

Any undisclosed or unethical use of generative AI by authors, reviewers, or editors may result in:

  • Requests for correction or retraction of the article
  • Notification of the author’s institution or funding agency
  • Rejection of the manuscript and a ban on future submissions