The guidelines, which were developed in collaboration with researchers and related institutions, allow the use of AI tools in all stages of research - from design and development of the proposal to implementation, data analysis, report writing, and even the refereeing process - provided that ethical standards and principles of transparency are observed.
According to this document, researchers are required to fully disclose the type and extent of their use of AI tools in proposals and final reports. This disclosure must include the name of the tool, the software version, the date of use, and an explanation of its role in the different parts of the research. For example, if AI is used to generate text or analyze data, the researcher must cite the tool as a source, describe how its output was reviewed, and explain how it was validated.
One of the key points of the guidelines is the emphasis on researcher accountability. According to this document, researchers cannot delegate responsibility for the scientific accuracy of content or results to AI and will still be accountable for the quality and scientific validity of the work. In other words, AI is only an auxiliary tool and does not replace human scientific judgment.
In the section on refereeing and supervision, the responsibilities of supervisors and referees are also defined. When reviewing proposals and reports, they must verify that the use of artificial intelligence is correctly disclosed and does not violate the principles of research ethics. In cases of ambiguity, referees may ask the researcher for clarification or for a reliable source. Referees are also advised not to be biased for or against the use of artificial intelligence and to judge submissions solely on their scientific and ethical merit.
This guideline was developed with inspiration from international experience and established ethics documents, such as European Union frameworks and those of the Global Research Council (GRC). At the same time, all of its content has been reviewed against the country's local needs under the supervision of the National Science Foundation of Iran.
The National Science Foundation of Iran has stated that the purpose of approving this document is to "responsibly use the capabilities of artificial intelligence to promote research," and has emphasized that its careful implementation can prevent problems such as plagiarism, privacy violations, and the dissemination of inaccurate data.