AI Policies

IJECE recognizes the significance of artificial intelligence (AI) and machine learning in scholarly publishing. As generative AI tools such as ChatGPT, Gemini, Claude, and others become increasingly accessible, IJECE emphasizes the importance of addressing both their potential advantages and the ethical considerations involved. This policy outlines the journal’s official position on the use of AI tools by authors, reviewers, and editorial staff, ensuring compliance with the standards set by Elsevier and the Committee on Publication Ethics (COPE).

  1. Use of AI Tools by Authors

Authors are permitted to use AI tools to support the preparation of manuscripts, provided such use is transparent, responsible, and ethical. AI tools may be used for tasks such as language editing, grammar improvement, and reference formatting. However, AI must not be used to generate content that substitutes for original scientific thinking or interpretation of results. Crucially, authors remain solely responsible for the content of their work, including any sections developed with the help of AI tools.

In accordance with the COPE Position Statement (2023), AI tools cannot be listed as authors under any circumstances. Authorship implies the capacity for accountability, consent, and intellectual contribution—criteria which AI tools do not fulfill. Therefore, all named authors must be human individuals who meet established authorship requirements.

  2. Disclosure of AI Use

Authors must fully disclose the use of AI tools in their manuscript submissions. This includes, but is not limited to, tools used for text generation, image creation, data analysis, coding assistance, or translation. The disclosure should be placed in the Acknowledgements section of the manuscript and should specify the name of the AI tool, the version used, and the purpose of its application.

For example, authors may include a statement such as:

“The author(s) used OpenAI’s ChatGPT to edit and refine the wording of the Introduction. All outputs were reviewed and verified by the authors.”

or

"During the preparation of this work, the authors used ChatGPT to enhance the clarity of the writing. After using ChatGPT, the authors reviewed and edited the content as needed and took full responsibility for the publication’s content."

Failure to disclose AI usage may be considered a breach of ethical publishing standards and could result in rejection or retraction of the article.

The Use of Generative AI and AI-Assisted Tools in Figures, Images, and Artwork

We do not permit the use of generative AI or AI-assisted tools to create or alter images in submitted manuscripts. Prohibited alterations include enhancing, obscuring, moving, removing, or introducing a specific feature within an image or figure. Adjustments of brightness, contrast, or color balance are acceptable as long as they do not obscure or eliminate any information present in the original. Image forensics tools or specialized software may be applied to submitted manuscripts to identify suspected image irregularities.

The only exception is when the use of AI or AI-assisted tools is part of the research design or research methods (such as AI-assisted imaging approaches used to generate or interpret the underlying research data, for example in the field of biomedical imaging). In such cases, the use must be described in a reproducible manner in the methods section, including an explanation of how the AI or AI-assisted tools were used in the image creation or alteration process, together with the name of the model or tool, its version and extension numbers, and its manufacturer. Authors should adhere to the AI software’s specific usage policies and ensure correct content attribution. Where applicable, authors may be asked to provide pre-AI-adjusted versions of images and/or the composite raw images used to create the final submitted versions, for editorial assessment.

The use of generative AI or AI-assisted tools in the production of artwork, such as for graphical abstracts, is not permitted. The use of generative AI in the production of cover art may, in some cases, be allowed if the author obtains prior permission from the journal editor and publisher, can demonstrate that all necessary rights have been cleared for the use of the relevant material, and ensures that there is correct content attribution.

  3. Author Responsibility and Accountability

Authors are wholly responsible for the content and integrity of their manuscripts. Even when AI tools are used to assist in writing or other tasks, authors must ensure the accuracy, originality, and appropriateness of the final submission. Authors are expected to verify that AI-generated content does not contain hallucinated references, incorrect scientific claims, biased interpretations, or plagiarized text.

Misuse of AI tools, including submission of entirely or largely AI-generated papers without human oversight or disclosure, will be treated as unethical conduct in accordance with COPE guidelines and IJECE’s editorial policy.

  4. Use of AI in Peer Review

IJECE expects all peer reviewers to conduct their evaluations based on confidentiality, integrity, and scholarly competence. Reviewers must not use AI tools to generate reviews, nor input manuscript content into AI platforms without explicit authorization, as doing so may violate confidentiality agreements.

If a reviewer wishes to use AI for non-content tasks (e.g., grammar improvement of their review text), they must ensure that no confidential information is shared and must disclose such use to the editor. The editorial team reserves the right to reject reviews generated with inappropriate use of AI tools.

  5. Editorial Use of AI

Editorial staff at IJECE may utilize AI tools to support non-decision-making tasks such as plagiarism detection, formatting checks, and language editing. However, AI tools will not be used to make acceptance or rejection decisions. All editorial judgments will be made by qualified human editors to ensure accountability and adherence to ethical standards.

  6. Ethical Considerations and Bias Prevention

The use of AI must not compromise ethical integrity, and authors must ensure that AI tools do not introduce bias, misrepresentation, or offensive content. Authors are encouraged to critically assess AI-generated outputs and to avoid over-reliance on such tools, particularly in tasks that require nuanced academic judgment.

  7. Violations and Consequences

Misrepresenting AI-generated content as original work, fabricating references or data with AI, or failing to disclose the use of AI tools may constitute a breach of publication ethics. IJECE reserves the right to:

  • Reject the manuscript outright
  • Request revisions or corrections
  • Retract the article post-publication
  • Inform the authors’ affiliated institutions, if necessary

All such cases will be investigated following COPE guidelines on misconduct.

  8. Policy Review and Updates

This AI policy will be reviewed regularly and updated as necessary to reflect technological advancements and evolving best practices in academic publishing. IJECE remains committed to supporting responsible innovation while protecting the quality and integrity of scholarly communication.

References

  1. Elsevier (2023). Generative AI Policies for Journals. Retrieved from:
    https://www.elsevier.com/about/policies-and-standards/generative-ai-policies-for-journals
  2. Committee on Publication Ethics (COPE) (2023). Position Statement on Authorship and AI Tools. Retrieved from:
    https://publicationethics.org/guidance/cope-position/authorship-and-ai-tools
  3. Committee on Publication Ethics (COPE) (2024). Discussion Document on AI and Peer Review. Retrieved from:
    https://publicationethics.org/news/cope-publishes-guidance-on-ai-in-peer-review
  4. Committee on Publication Ethics (COPE) (2023). Discussion Paper: Ethical Considerations in the Use of Generative AI in Publishing. Retrieved from:
    https://publicationethics.org/topic-discussions/artificial-intelligence-ai-and-fake-papers