Effective Date: January 2026
Purpose and Scope
This policy establishes guidelines for the responsible use of artificial intelligence (AI) tools in all laboratory research activities, including data analysis, manuscript preparation, grant writing, and peer review. All laboratory members—faculty, postdoctoral researchers, graduate students, undergraduate students, and staff—must adhere to these standards to maintain scientific integrity, comply with federal regulations, and uphold ethical research practices.1,2,3
Core Principles
Laboratory AI use is guided by three foundational principles:
- Transparency: Full disclosure of AI applications in research outputs.
- Accountability: Humans remain solely responsible for the accuracy and integrity of all AI-generated outputs.
- Human Oversight: AI tools must be used to augment, not replace, critical thinking and scientific judgment.1,2,4
Permitted Uses
AI tools may be used for the following purposes with appropriate verification and disclosure:
- Bioinformatics Workflows: Including sequence quality control, taxonomic classification assistance, and metagenomic binning support.5,6,7
- Literature Review: Summarizing large bodies of literature and assisting with hypothesis generation (all AI-generated summaries must be verified against primary sources).
- Data Visualization & Statistics: Generating code for plots or suggesting statistical approaches.
- Writing Assistance: Improving manuscript language, grammar, flow, and readability.
- Coding Assistance: Debugging scripts and generating analysis pipelines.
- Grant Drafting: Generating preliminary text for grant applications, strictly adhering to sponsor guidelines.1,2,3
All uses must be documented with tool names, versions, dates, and specific applications.3,4
Prohibited Uses
The following applications are strictly forbidden:
- Unattributed Content: Submitting AI-generated content as original work without substantial human contribution and disclosure.1,2,8
- Peer Review Violation: Using AI tools to conduct peer review of manuscripts or grant applications. Uploading confidential manuscripts to AI platforms violates confidentiality and funding agency regulations.9,10,11
- Data Privacy Violations: Inputting confidential, proprietary, or unpublished research data into public AI platforms (e.g., standard ChatGPT, Claude) without approved data protection measures.1,2,12
- Sensitive Data Exposure: Entering human subjects data, protected health information (HIPAA), student records (FERPA), or export-controlled information into unapproved AI systems.1,2,3
- Code Security Risks: Pasting API keys, passwords, or server credentials into AI chatbots for debugging purposes.
- AI Authorship: Listing AI tools as authors or co-authors on publications.4,8,13,14
- Fabrication: Using AI to generate references, citations, or synthetic primary data (e.g., fabricating microscope images or sequencing reads).8,13
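The credential-exposure risk named above is easiest to avoid by keeping secrets out of scripts entirely, so that any code later pasted into an AI assistant for debugging contains no sensitive values. A minimal sketch (the environment variable name is illustrative, not a lab standard):

```python
import os


def get_api_key() -> str:
    """Read an API credential from the environment instead of hardcoding it.

    Code written this way can be shared with an AI chatbot for debugging
    without exposing the credential itself, since the secret lives only
    in the shell environment.
    """
    key = os.environ.get("SEQ_PORTAL_API_KEY")  # illustrative variable name
    if key is None:
        raise RuntimeError(
            "SEQ_PORTAL_API_KEY is not set; export it in your shell. "
            "Never paste the value into scripts or chat windows."
        )
    return key
```

The same pattern applies to passwords and server credentials: load them at runtime from the environment or a local, untracked configuration file.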
Data Protection and Privacy
Researchers must classify data according to UF’s Data Classification Policy before using AI tools.1,3
- Public AI Platforms: (e.g., ChatGPT, Claude, Gemini) may only process publicly available or fully de-identified data.2,12
- Restricted Data: Sensitive research data—including unpublished sequences, microbiome datasets with human subjects metadata, proprietary collaborator data, and preliminary results—require the use of institutionally approved, secure AI environments (e.g., UF HiPerGator AI protected instances) or must not be processed through AI systems.1,2,3,12
When in doubt, consult with laboratory leadership before entering data into any AI tool.
Disclosure and Attribution
All AI use in research outputs requires transparent disclosure.1,4,8,13
- Manuscripts: Describe AI applications in the Methods section for data analysis or research design. Acknowledge AI assistance for writing in the Acknowledgments section, specifying the tool name, version, purpose, and extent of use.4,8,13,14,15
- Grant Applications: Follow sponsor-specific guidelines. NIH permits limited AI use but prohibits substantially AI-generated applications and requires human accountability for all content.9,10,11
Authors remain fully responsible for accuracy, originality, and integrity of all work regardless of AI involvement.8,13,15
Quality Control and Validation
All AI-generated outputs must undergo rigorous human verification.1,4,17 Researchers must:
- Validate all data interpretations, statistical results, and biological conclusions.
- Check for algorithmic bias, particularly in taxonomic classifications and functional predictions.5,7
- Verify all citations and factual claims, as AI systems frequently generate plausible but incorrect information (“hallucinations”).
- Ensure reproducibility by documenting AI tool parameters, prompts, and version information.1,3,4
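One lightweight way to capture the provenance required above (tool, version, prompt, date) is an append-only log kept alongside the analysis. A minimal sketch, with field names chosen for illustration rather than mandated by this policy:

```python
import datetime
import json
from pathlib import Path


def log_ai_use(logfile: Path, tool: str, version: str,
               purpose: str, prompt: str) -> dict:
    """Append one AI-usage record to a JSON Lines log file.

    Each line is a self-contained JSON object recording the tool name,
    version, date, purpose, and the prompt used, so the analysis can be
    audited or reproduced later.
    """
    record = {
        "date": datetime.date.today().isoformat(),
        "tool": tool,
        "version": version,
        "purpose": purpose,
        "prompt": prompt,
    }
    with logfile.open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")
    return record
```

Committing such a log to the project repository keeps the documentation requirement from the Permitted Uses section satisfied as a routine side effect of each analysis.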
Training and Education
New laboratory members will receive onboarding on this policy and discipline-specific AI best practices.3,18,19 The laboratory will maintain awareness of evolving AI technologies, institutional guidelines, and journal policies through regular discussions and policy updates.3,20,21 Principal investigators and senior researchers are responsible for mentoring trainees on appropriate AI use aligned with scientific integrity standards.22
Compliance and Review
This policy aligns with UF’s AI research guidance,1,3 NIH requirements for federally funded research,9,10,11 and major journal publisher policies in ecology and microbiology.8,13,14,23 Laboratory members who violate this policy may face consequences under UF’s Research Misconduct Policy.1 This policy will be reviewed annually.
Questions and Guidance
Laboratory members with questions about appropriate AI use should consult with the Principal Investigator before proceeding. For complex cases involving sensitive data, IRB protocols, or unclear applications, consultation with UF’s AI working group or Office of Research is recommended.24,25
Policy Acknowledgment
Print and sign this page if a hard copy is required, or confirm acknowledgment via email.
Principal Investigator Signature: _____________
Date: ________
Member Acknowledgment: All laboratory members must read and acknowledge understanding of this policy upon joining the laboratory and annually thereafter.
References
1. University of Florida. Guidance for Researchers - AI.
2. Northeastern University. Standards for the Use of Artificial Intelligence in Research. 2023.
3. University of Florida. Best Practices for Generative AI (PDF).
4. Messerli et al. Ten simple rules for optimal and careful use of generative AI in science. PLOS Computational Biology. 2025.
5. Li et al. AI-empowered human microbiome research. Gut. 2025.
6. Wang et al. Artificial Intelligence for Microbiology and Microbiome Research. arXiv. 2024.
7. Chen et al. Integrating Artificial Intelligence in Next-Generation Sequencing. PMC. 2025.
8. Ecological Society of America. Artificial Intelligence (AI) Policy – Publications.
9. NIH Office of Extramural Research. The Use of Generative Artificial Intelligence in the NIH Peer Review Process (NOT-OD-23-149). 2023.
10. CITI Program. NIH Clarifies Prohibition on the Use of AI Tools in Peer Review Processes. 2024.
11. NIH Extramural Nexus. Apply Responsibly: Policy on AI Use in NIH Research Applications. 2025.
12. Harvard University Information Technology. Generative AI Guidelines.
13. Council of Science Editors. CSE Guidance on Machine Learning and Artificial Intelligence Tools. 2023.
14. ESA Journals - Wiley. Ecology Author Guidelines.
15. Elsevier. The use of AI and AI-assisted technologies in writing for Elsevier. 2023.
16. National Institutes of Health. Grants Policy Statement: Research Integrity. 2025.
17. University of Illinois. Best Practices in Using Generative AI in Research. 2024.
18. University of Georgia Research. Guidance on AI Use in Research. 2023.
19. University of Washington Graduate School. Effective and Responsible Use of AI in Research. 2024.
20. University of Florida Libraries. AI @ UF - Artificial Intelligence Guides. 2025.
21. Ecological Society of America. Media Tip Sheet: Artificial Intelligence at #ESA2025. 2025.
22. Ecological Society of America. Code of Ethics.
23. MIT Information Systems & Technology. Guidance for use of Generative AI tools. 2025.
24. University of Florida. Working Group in AI Ethics and Policy.
25. University of Florida. Data Classification Policy.