Policy on the Use of AI and AI-Enabled Technologies
This policy is based on the principles of Elsevier’s general policy and sets forth the following guidelines, aimed at ensuring transparency and providing appropriate guidance to authors, reviewers, editors, and readers.
The journal «Preventive Medicine. Theory and Practice» supports the principles of responsible use of artificial intelligence. This policy applies only to the writing process and does not cover the use of AI tools for analyzing and drawing conclusions from data as part of the research process.
The journal adheres to the principles of academic integrity, transparency, and accountability in the use of automated tools and artificial intelligence (AI) technologies, including generative models (LLMs, including chatbots), in its research and publishing activities.
Policy for Authors
Authors are required to disclose any use of generative artificial intelligence in preparing the manuscript, except for technical editing (proofreading, formatting, and stylistic editing).
Authors bear full responsibility for the authenticity, accuracy, and scientific correctness of the results obtained using automated tools.
Generative artificial intelligence cannot be listed as an author or co-author of a scientific publication.
Generative AI cannot be cited as a source of information in the reference list.
Policy for Reviewers and Editors
Reviewers and editors must not use generative artificial intelligence to produce reviews or editorial decisions, given the risks of confidentiality breaches, bias, superficial assessments, hidden prompts, or the generation of inaccurate information (including fictitious references).
The use of automated tools for editing or language refinement is permitted provided that such use is fully disclosed.
All automated processes used by the journal (including tools for checking text similarity and for detecting image manipulation or undeclared use of AI) are subject to mandatory human oversight (human-in-the-loop).
The journal ensures that an editor or designated staff member reviews the results of automated tools before making any editorial decisions.
The journal adheres to the principles of transparency, responsible use of technology, and academic ethics in the preparation, peer review, and publication of scholarly materials.