AI Tools Usage Policy

1. Introduction

The EMITTER International Journal of Engineering Technology acknowledges the transformative role of Artificial Intelligence (AI) in scholarly communication. AI tools can assist authors in enhancing writing quality, analyzing data, and automating certain editorial processes. However, their use must be managed carefully to ensure integrity, originality, transparency, and accountability. This policy defines EMITTER’s position on ethical and responsible AI usage in manuscript preparation and aligns with the Committee on Publication Ethics (COPE) Core Practices and its guidance on AI in decision-making.

2. Definition of AI Tools

For this policy, AI tools refer to digital systems or platforms that apply artificial intelligence methods—such as machine learning (ML), natural language processing (NLP), or deep learning—to generate, analyze, translate, summarize, or modify text, data, images, or audio. Examples include generative AI systems (e.g., ChatGPT, Gemini, Claude), writing aids (e.g., Grammarly, DeepL Write, QuillBot), AI-driven data analysis and visualization tools, software for automated image or graph creation, and AI-based literature or citation managers.

3. Acceptable Use of AI Tools

Authors may use AI tools for limited and transparent purposes. Permitted uses include grammar, spelling, and punctuation correction; improving clarity, tone, and structure; formatting references and citations; conducting preliminary literature searches; assisting, but not replacing, statistical analysis or visualization; and creating basic illustrations, provided they are ethically sourced and reviewed. AI tools must not be used to generate substantial portions of the manuscript, fabricate or falsify data, or paraphrase existing works without human verification. A maximum of 20% AI-generated content (as identified by AI detection software) is acceptable; authors must revise any manuscript exceeding this threshold before it can proceed.

4. Responsibilities of Authors

Authors bear full responsibility for all manuscript content, including any text or materials produced or modified with AI tools. They must verify accuracy, originality, and reliability; ensure the absence of plagiarism, bias, or hallucination; acknowledge all external data or materials used by AI; and take full accountability for ethical or factual errors resulting from AI use. All AI-generated outputs must be critically reviewed and edited to meet disciplinary and scholarly standards.

5. Authorship and AI

AI tools cannot be credited as authors or co-authors. Authorship is restricted to individuals who have made significant intellectual contributions and can take responsibility for the work’s content. Listing AI tools in author names or contribution statements is prohibited and may lead to rejection or retraction.

6. Disclosure Requirements

If AI tools are used beyond basic language or formatting support, authors must disclose this information clearly. Disclosure should include the name, version, and provider of the AI tool, along with a description of its purpose and extent of use, and a statement confirming that the authors reviewed and are responsible for all AI-assisted content.

7. Placement of Disclosure

Depending on the role of AI tools, disclosure must appear in one or more of the following sections:

  a) Methods – if used for data analysis, coding, or figure creation;
  b) Acknowledgments – if used for language or formatting assistance;
  c) a dedicated statement titled “Declaration of AI Tool Usage,” for example: “During the preparation of this manuscript, the authors used [tool name, version, developer] for [purpose]. All outputs were reviewed and verified by the authors to ensure accuracy and integrity.”

8. Editorial and Peer Review Oversight

Editors and reviewers will consider AI disclosures as part of the ethical evaluation of each manuscript. If inappropriate or undisclosed AI use is suspected, the editorial office may request clarification, reject the manuscript, or notify the author’s institution. AI-detection tools may be used in this assessment, but all final judgments will involve human oversight.

9. Consequences of Non-Compliance

Failure to comply with this policy may result in rejection at any review stage, retraction after publication, notification to the author’s institution, or bans on future submissions in cases of repeated or severe misuse.

10. Appeals

Authors may appeal editorial decisions related to AI use by submitting a formal written request to the Editor-in-Chief. Appeals must include evidence and reasoning. The case will be reviewed by the editorial ethics committee or referred to COPE if necessary.

11. Editorial AI Use

EMITTER does not use AI tools for autonomous editorial or peer review decisions. Any future AI applications by the editorial team will be disclosed transparently and verified by human editors.

12. Policy Review and Updates

This policy will be reviewed periodically to reflect evolving AI technologies and ethical standards. Authors should consult the latest version before submission and contact the editorial office for clarification.

13. Ethical Framework

This policy follows ethical principles and guidance issued by the Committee on Publication Ethics (COPE), including COPE Core Practices, the Discussion Document on AI in Decision-Making, and Guidelines on Authorship and Retraction. Compliance with these standards is required for publication in EMITTER.