Content Generation and Proofing
Artificial intelligence offers powerful capabilities to assist in drafting, refining, and polishing written content across academic, clinical, and administrative contexts. When employed thoughtfully, AI tools can enhance productivity, improve clarity, and support adherence to stylistic and ethical standards. However, responsible use requires maintaining appropriate human oversight, ensuring accuracy, protecting sensitive information, and preserving the integrity of original scholarship and authorship.
This guidance offers recommendations for the ethical and effective use of AI in content creation and proofreading, along with best practices designed to uphold Emory's commitment to academic excellence, transparency, and institutional values throughout the writing and editing process.
Best Practices
Before using AI tools in any capacity, individuals should proactively verify whether such use is permissible within their specific context. This involves consulting relevant authorities and reviewing all applicable policies, including but not limited to:
- Academic coursework: Students must review course syllabi, program handbooks, and faculty expectations regarding AI use. Many instructors specify whether tools such as ChatGPT, Grammarly, or other generative technologies are permitted for assignments and under what conditions.
- Research and publications: Investigators and authors preparing manuscripts for submission should review the AI authorship and content-generation policies of their target journals. Many leading journals (e.g., Nature, JAMA, NEJM) have issued formal statements outlining whether and how AI contributions must be disclosed or limited.
- Administrative and operational communications: Staff developing institutional documents or communications should consult department leadership, legal counsel, communications and marketing guidelines, and the Office of Information Technology (OIT), which provides secure AI tool guidance and technical support, to confirm permissible use of AI during drafting or editing.
- Health care and clinical settings: AI-generated content in clinical environments must comply with patient privacy laws (e.g., HIPAA) and Emory Healthcare guidelines. Users should refer to Emory’s health care-specific AI use guidance. Users should consult with Emory’s Office of General Counsel if they have any questions about using any AI tool that would interact with protected health information.
Content produced by AI tools should be reviewed prior to adoption in clinical, administrative, or other organizational settings. Use only Emory-Approved Secure AI Tools. Users should:
- Cross-check all AI-generated edits for factual accuracy, coherence, and consistency.
- Confirm the output aligns with Emory’s academic or clinical standards, including voice, structure, and sensitivity to context.
- Retain the original version of your content to compare changes and reverse any inappropriate or unnecessary alterations.
- Disclose AI involvement where relevant or required (e.g., in co-authored papers or public reports).
Practices to Avoid
There may be specific considerations for individual AI tools that have been vetted and approved for use by OIT or Emory Digital and listed in the EASAT registry. Please refer to EASAT and tool-specific websites for more information.
Distinguishing AI Use: Many AI features are embedded in everyday tools, making it difficult to distinguish between "AI use" and routine writing assistance. If you are uncertain, treat any tool that suggests or rewrites content as potentially AI-enhanced and proceed with caution.
While disclosure of AI use is required in specific scenarios (e.g., journal articles or where legally mandated), it is generally not necessary to disclose AI use for internal drafting, brainstorming, or minor content refinement. However, the individual or team remains responsible for the accuracy and appropriateness of the final content.
Unit-Level Guidance:
- In general, higher-risk AI applications demand stricter validation processes; in cases of uncertainty, disclosure is encouraged.
- Each Emory unit or department should define clear disclosure thresholds that distinguish substantive AI contributions from minor editing support, and should establish internal policies for review and transparency.
Educational Contexts: Students must follow faculty instructions regarding AI disclosure, recognizing that permissions may vary by course or assignment.