AI Tools Usage Policy
Journal of Midwifery (JOM) supports responsible use of Artificial Intelligence (AI) tools while safeguarding research integrity, participant privacy, and clinical safety. This policy applies to authors, editors, and reviewers and aligns with JOM’s Publication Ethics framework.
1. Introduction
- AI tools (e.g., large language models, image/audio generators, code assistants, translation and grammar tools) can aid scholarly communication, but they cannot be authors and do not bear responsibility for the work.
- All AI use must be transparent, ethically compliant, and human-verified. Undisclosed or inappropriate use may lead to rejection, correction, retraction, or other actions under the Misconduct policy.
2. Description
- Permitted categories (with disclosure): language editing and translation; grammar and clarity improvements; code refactoring; figure redrawing for schematics; drafting outlines or summaries that are subsequently human-edited and verified.
- Prohibited categories: fabricating or altering data, results, images, references, or peer review; uploading confidential or identifiable information to third-party tools without a lawful basis and permission; generating clinical advice without appropriate validation and oversight.
- Research use of AI systems: when AI/ML is the object of study (e.g., a triage model), authors must provide sufficient detail for appraisal and, where feasible, reproducibility (see Section 3.4 and Data & Reproducibility policy).
3. Policy
3.1 Authorship and Accountability
- AI tools cannot be listed as authors or co-authors.
- Humans remain accountable for all content, including text, data, images, and code influenced by AI.
- The corresponding author must confirm that AI use (if any) was disclosed and verified by humans.
3.2 Disclosure Requirements for Authors
Include an AI Use Disclosure in the manuscript (Methods, Acknowledgments, or a dedicated “AI Use” section) specifying:
- Tool name, provider, version, and access date (e.g., “ChatGPT, OpenAI, v.X, accessed 15 Oct 2025”).
- Purpose and scope (e.g., language polishing; diagram redrawing; code linting).
- Human verification steps (e.g., fact-checking, re-running analyses, verifying references).
- Any limitations introduced by the tool and how they were mitigated.
Sample wording (language editing):
“During manuscript preparation, we used ChatGPT (OpenAI, version and access date listed) for grammar and clarity. The authors reviewed and edited all content and are responsible for the final text.”
Sample wording (figure/code):
“Figure 2 was redrawn using an AI-assisted vector tool. Analysis scripts were refactored with [Tool]; all code and outputs were verified and re-run by the authors.”
3.3 Prohibited or Restricted Uses for Authors
- No fabrication or manipulation of data, images, or references.
- No confidential/identifiable uploads (e.g., manuscripts under review; patient data; internal peer-review comments) to external AI tools without explicit permission and legal basis. De-identification does not guarantee anonymity for biometric images.
- No undisclosed ghostwriting by AI.
- No AI-generated patient images purporting to represent real individuals or clinical findings.
3.4 When AI/ML Is the Subject of the Research
Provide, where feasible:
- Model description (architecture or source), training/validation data provenance, inclusion/exclusion criteria, and license/terms.
- Performance metrics, calibration, and bias/fairness assessments across relevant subgroups (e.g., maternal age, parity, socioeconomic indicators).
- Clinical validation setting, human oversight, and risk controls; do not present experimental decision support as standard of care without appropriate approval and consent.
- Availability of code/models or executable artifacts, with a Data Availability Statement consistent with ethics approvals.
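A minimal sketch of the kind of subgroup performance reporting described above, using hypothetical data and plain Python; the subgroup names, records, and accuracy metric here are illustrative assumptions, not a prescribed JOM method:

```python
# Minimal sketch of a per-subgroup performance check (Section 3.4).
# All records and subgroup labels are hypothetical illustrations.
from collections import defaultdict

def subgroup_accuracy(records):
    """Compute accuracy of model predictions within each subgroup.

    records: iterable of (subgroup, y_true, y_pred) tuples.
    Returns a dict mapping subgroup -> accuracy.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, y_true, y_pred in records:
        total[group] += 1
        if y_true == y_pred:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical triage-model predictions, grouped by parity.
records = [
    ("nulliparous", 1, 1), ("nulliparous", 0, 0), ("nulliparous", 1, 0),
    ("multiparous", 1, 1), ("multiparous", 0, 0), ("multiparous", 0, 0),
]
print(subgroup_accuracy(records))
```

In practice authors would report calibration and fairness measures appropriate to the model and setting; the point of the sketch is that metrics are computed and reported per subgroup, not only in aggregate.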
3.5 Editors and Reviewers
- Do not upload unpublished manuscripts or confidential reviews to third-party AI tools.
- Light, local grammar/spell-checking is acceptable if confidentiality is preserved; reviewers remain fully responsible for content.
- Any AI assistance in a review must be disclosed to the editor (internally) and used only for non-confidential tasks unless explicit permission is granted.
3.6 Image, Audio, and Video Integrity
- Disclose AI use in image processing (e.g., denoising/redrawing of schematic figures).
- Clinical/diagnostic images must not be altered in ways that misrepresent findings. Global, non-misleading adjustments are acceptable and should be reported in Methods.
3.7 Citations and References
- Do not rely on AI-generated references. Authors must verify every citation and DOI. Fictitious or mismatched citations constitute a serious integrity breach.
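As one practical aid for the verification described above, DOIs can be pulled out of a reference list so each can be resolved and checked by hand. The sketch below is an assumption-laden helper, not an official JOM check: the regex is a common heuristic for Crossref-style DOIs, and every extracted DOI still requires human verification against the cited work.

```python
# Sketch of a DOI-extraction helper for manual reference audits.
# The pattern is a common heuristic for Crossref-style DOIs; it is
# illustrative only, and extracted DOIs still need human checking.
import re

DOI_PATTERN = re.compile(r"\b10\.\d{4,9}/[-._;()/:A-Za-z0-9]+")

def extract_dois(reference_text):
    """Return DOIs found in a block of reference text,
    with trailing punctuation stripped."""
    return [m.rstrip(".,;") for m in DOI_PATTERN.findall(reference_text)]

# Hypothetical reference list for illustration.
refs = """
1. Smith J. Example study. J Midwifery. 2024. doi:10.1234/abcd.5678.
2. Lee K. Another study. 2023. https://doi.org/10.5555/xyz-001.
"""
print(extract_dois(refs))
```

Each extracted DOI can then be resolved (e.g., via doi.org) and compared against the cited title and authors.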
3.8 Detection and Editorial Checks
- JOM may use screening tools (text similarity, image forensics, reference audits) to identify potential AI misuse. Results are indicative, not determinative; editorial decisions rely on human assessment.
3.9 Intellectual Property and Licensing
- Ensure that AI-assisted outputs do not infringe rights and that tool terms of use permit the intended scholarly publication.
- Third-party content integrated via AI must have clear permissions or independent licensing compatible with the article’s license (see Intellectual Property policy).
3.10 Enforcement
- Non-compliance may lead to revisions, rejection, corrections, expressions of concern, or retraction, and may be referred to institutions/funders under the Misconduct policy.
4. Technicalities to Achieve and Materialise the Policies
4.1 For Authors (Submission Checklist)
- Add an AI Use Disclosure with tool name, version/date, purpose, and human verification.
- Confirm no confidential/identifiable data were uploaded to third-party tools (or provide documented permission and lawful basis).
- Verify all references and numerical results; re-run analyses if AI assisted with coding.
- Ensure data/code availability statements remain valid and consistent with Ethical Oversight and Data & Reproducibility policies.
4.2 For Editors
- At initial check, verify the presence and adequacy of an AI Use Disclosure when AI use is plausible.
- Where concerns arise (e.g., improbable references, incoherent methods), request clarifications or underlying data/code; escalate to Misconduct procedures if warranted.
4.3 For Reviewers
- Confirm you will not use external AI tools on confidential text without permission.
- If any AI aid was used for non-confidential tasks (e.g., grammar), inform the handling editor via a brief note in the review form.
4.4 Reporting and Transparency
- If substantive AI involvement is discovered post-publication, JOM may publish a Correction to add disclosures or, for severe issues, other notices per the Post-Publication policy.
Related and supporting policies
- Allegations of Misconduct: https://jom.fk.unand.ac.id/index.php/jom/misconduct
- Data and Reproducibility: https://jom.fk.unand.ac.id/index.php/jom/data-reproducibility
- Ethical Oversight: https://jom.fk.unand.ac.id/index.php/jom/ethical-oversight
- Conflicts of Interest: https://jom.fk.unand.ac.id/index.php/jom/conflicts-of-interest
- Peer-Review Processes: https://jom.fk.unand.ac.id/index.php/jom/peer-review
- Intellectual Property: https://jom.fk.unand.ac.id/index.php/jom/intellectual-property
- Post-Publication Discussions and Corrections: https://jom.fk.unand.ac.id/index.php/jom/post-publication
Contact
Questions about this policy or disclosures: jom@med.unand.ac.id
Back to Publication Ethics main page: https://jom.fk.unand.ac.id/index.php/jom/ethics