Generative AI Policy

This policy is based on and refers to the guidelines outlined in the Generative AI Policies for Journals, as provided by:

  • STM: Recommendations for classifying AI use in academic manuscript preparation
  • Elsevier: The application of generative AI and AI-assisted technologies within the review process
  • WAME: Chatbots, generative AI, and scholarly manuscripts

Cahaya Ilmu Cendekia Publisher recognizes the importance of artificial intelligence (AI) and its potential to help authors with their research and writing processes. Generative AI tools, such as large language models (LLMs) and multimodal models, continue to develop and evolve, including in their applications for businesses and consumers. Cahaya Ilmu Cendekia Publisher welcomes the new possibilities that generative AI tools bring, especially for generating ideas, analyzing results, improving writing, organizing submissions, assisting authors who write in a second language, and speeding up the research and sharing process.

Cahaya Ilmu Cendekia Publisher therefore offers guidance to authors, editors, and reviewers on the use of such tools; this guidance may evolve as the AI field continues to develop. Generative AI tools can produce diverse forms of content, spanning text generation, image synthesis, audio, and synthetic data. Examples include ChatGPT, Copilot, Gemini, Claude, NovelAI, Jasper AI, DALL-E, Midjourney, and Runway. While generative AI has immense potential to enhance creativity for authors, the current generation of tools carries certain risks.
Some of the risks associated with the way these tools work today are:

  1. Inaccuracy and bias: Generative AI tools are statistical in nature (as opposed to factual) and, as such, can introduce inaccuracies, falsehoods (so-called hallucinations), or bias that can be hard to detect, verify, and correct.
  2. Lack of attribution: Generative AI tools often fail to follow the scholarly community's standard practice of correctly and precisely attributing ideas, quotations, and citations.
  3. Confidentiality and Intellectual Property Risks: At present, Generative AI tools are often used on third-party platforms that may not offer sufficient standards of confidentiality, data security, or copyright protection.
  4. Unintended uses: Generative AI providers may reuse the input or output data from user interactions (e.g., for AI training). This practice could potentially infringe on the rights of authors and publishers, amongst others. 

This policy outlines the journal's stance on the ethical and responsible use of artificial intelligence (AI) and AI-assisted technologies in the preparation of manuscripts submitted for publication. It aims to ensure transparency, accountability, and the integrity of the scientific record.

Authorship and Accountability

  • AI cannot be an Author: AI tools and AI-assisted technologies (e.g., large language models, generative AI) do not meet the criteria for authorship as they cannot take responsibility for the content, integrity, or originality of the work. Therefore, AI tools or software cannot be listed as authors on any submitted manuscript.
  • Authors' Full Responsibility: Authors remain fully responsible and accountable for the entire content of their submitted manuscript, including any parts generated, edited, or enhanced by AI tools. This includes the accuracy, integrity, originality, and ethical soundness of the work. Authors must verify the factual correctness of any statements, citations, data, or figures generated by AI.
  • Human Oversight Required: The use of AI tools must be under direct human supervision. Authors must critically evaluate, edit, and revise any material generated by AI to ensure it aligns with scientific standards, accuracy, and ethical guidelines.

Transparency and Disclosure

  • Mandatory Disclosure: Authors are required to disclose the use of AI and AI-assisted technologies in the preparation of their manuscript. This disclosure must be explicit, specific, and transparent.

  • What to Disclose: The disclosure should include:

    • The name of the AI tool(s) used: e.g., ChatGPT (OpenAI), Bard (Google), Grammarly, GPT-4, Midjourney, etc.
    • The specific purpose(s) for which the AI tool was used: e.g., language refinement, grammar check, drafting of specific sections (specify which sections), brainstorming, data analysis assistance, image generation, etc.
    • The extent of AI involvement: A brief description of how the AI tool contributed to the manuscript.
  • Where to Disclose: This disclosure should typically be included in one of the following sections:

    • Acknowledgements section (preferred for general writing assistance)
    • Methods section (if AI was used for specific methodological steps, e.g., data analysis or coding assistance)
    • A dedicated "Declaration of AI Use" statement just before the References section or in a footnote on the title page.

    Example Disclosure Statement: "Portions of this manuscript were drafted/edited/enhanced using [Name of AI tool, e.g., ChatGPT-4 (OpenAI)]. The authors used this tool for [specific purpose, e.g., improving grammar and clarity/drafting an initial version of the Introduction section]. All content generated by the AI was thoroughly reviewed, edited, and validated by the authors, who take full responsibility for the final content." Or for image generation: "Figure X was generated with the assistance of [Name of AI tool, e.g., Midjourney v5]. The authors provided the prompts and edited the output to ensure accuracy and relevance."

Permissible Uses (With Disclosure)
AI tools may be used to assist authors in the following ways, provided full disclosure is made:

  • Language and Grammar Refinement: Improving readability, spelling, grammar, and sentence structure.
  • Drafting Support: Assisting in the generation of initial drafts of specific, non-research-critical sections (e.g., parts of the Introduction or Discussion for stylistic purposes), which must then be thoroughly reviewed and revised by the authors.
  • Brainstorming and Idea Generation: Assisting in conceptualizing ideas or outlining the structure of the manuscript.
  • Data Analysis and Visualization Assistance: Only if verified and reproducible by human authors. If AI is used in data processing, analysis, or generating figures/tables, the specific methods, tools, and validation steps must be clearly described in the Methods section.
  • Summarization of Literature: Aiding in summarizing existing literature, but the authors must ensure the accuracy of the summary and proper citation of original sources.
  • Verification of AI-Generated Content: Authors should carefully review and validate any content generated or suggested by AI tools. The responsibility for the accuracy and quality of the manuscript remains with the authors.
  • Reference Management: AI tools may be used to help manage citations, format references, and create bibliographies.

Prohibited Uses
The use of AI and AI-assisted technologies is strictly prohibited for:

  • Generating Fictitious Content: Creating false data, fabricated research results, or non-existent references/citations.
  • Plagiarism: Using AI-generated content without proper attribution (i.e., treating it as original work when it is not fully human-generated or verified). All AI-generated content must be treated as any other source and properly attributed if it relies on existing intellectual property or specific datasets.
  • Substituting for Human Intellectual Contribution: AI cannot perform the core intellectual work of research, such as formulating original hypotheses, designing experiments, interpreting novel findings, or drawing original conclusions.
  • Violating Confidentiality: Reviewers and editors are strictly prohibited from using AI tools with confidential manuscript content (e.g., uploading the manuscript to publicly available AI models), as this may breach confidentiality, copyright, and the integrity of the peer-review process.
  • Misrepresenting Research: Using AI to intentionally mislead readers about the methods, results, or conclusions of the research.
  • Content Creation: AI tools should not be used to generate the primary intellectual content of the manuscript, including the introduction, methodology, results, and conclusion sections. Authors are expected to contribute original ideas and intellectual input to the manuscript.
  • Unethical Practices: AI tools should not be used to manipulate or fabricate data, create fraudulent results, or engage in any other unethical practices related to manuscript preparation.

Ethical Considerations
Authors must adhere to ethical guidelines and academic integrity standards when using AI tools in manuscript preparation. Any instance of AI-generated content that violates copyright, plagiarizes others' work, or otherwise breaches ethical practice will be treated as a breach of publication ethics.

Editorial Review
All manuscripts submitted to the journal, whether AI-assisted or not, will undergo the same rigorous peer-review process to assess their scientific merit, originality, and adherence to ethical standards.

Review of the Policy
This policy on the use of AI and AI-assisted technologies in manuscript preparation may be periodically reviewed and updated to reflect the evolving landscape of AI technology and to align with best practices in scholarly publishing.

Consequences of Misuse
Failure to adhere to this policy regarding the ethical use of AI and AI-assisted technologies will be considered a serious breach of publication ethics. Such breaches may result in:

  • Rejection of the submitted manuscript.
  • Retraction of the published article.
  • Banning of the author(s) from future submissions to the journal.
  • Notification to the authors' institution and relevant ethics committees.
