Why AI Ethics Matter
AI tools are powerful, but with great power comes great responsibility. Using AI ethically ensures:
- Academic Integrity: Your work remains honest and credible
- Fairness: You avoid perpetuating bias and discrimination
- Privacy: You protect sensitive information
- Accountability: You take responsibility for AI-generated content
- Trust: You maintain credibility with peers and institutions
AI Ethics Framework
Use this framework to evaluate whether AI use is appropriate:
1. Transparency
Question: Am I being honest about my use of AI?
- Disclose AI usage when required
- Document what the AI did
- Don't pass off AI work as your own
2. Accountability
Question: Am I responsible for this output?
- Review AI outputs carefully
- Verify facts and claims
- Fix errors before sharing
3. Fairness
Question: Could this AI use harm someone?
- Be aware of AI bias
- Consider impacts on marginalized groups
- Test for unintended consequences
4. Privacy
Question: Am I protecting sensitive data?
- Never share personal information
- Be cautious with proprietary data
- Follow FERPA and data privacy laws
Academic Integrity & AI
The Core Principle
AI should enhance your learning and work, not replace your thinking, effort, or understanding. When in doubt about whether AI use is appropriate, ask your instructor.
Acceptable Use Cases
✓ Generally Acceptable
- Brainstorming and outlining essays
- Understanding complex concepts
- Getting feedback on your draft (reviewing grammar, clarity)
- Learning programming concepts
- Researching topics and finding sources
- Creating visualizations and charts
- Writing boilerplate code in projects
- Practice problems and self-testing
Unacceptable Use Cases
✗ Generally Not Acceptable
- Submitting AI-generated essays as your own work
- Using AI to write your entire assignment
- Not disclosing when AI was used (when required)
- Copying code without understanding it
- Having AI do your thinking for you
- Using AI to bypass learning objectives
Gray Areas - What to Do
When the rules are unclear, ask yourself:
- Do my assignment guidelines address AI use?
- Would my instructor approve of this approach?
- Am I using this to learn or to avoid learning?
- Could this be considered plagiarism?
Sample Policy Language
Here's how to disclose AI use when required:
"I used [AI tool name] to [describe the task, e.g., brainstorm an outline and check grammar]. All analysis, conclusions, and final wording are my own, and I verified any facts or sources the tool provided."
Bias & Fairness in AI
Understanding Bias
AI systems can reflect biases from their training data. Common types include:
Gender Bias
AI may associate certain roles with specific genders based on training data patterns.
Racial Bias
Historical biases in data can lead to discriminatory outputs.
Age Bias
Assumptions about age groups may be embedded in AI systems.
Ability Bias
AI may make assumptions about people with disabilities.
What You Can Do
- Be Aware: Recognize that AI can be biased
- Test for Bias: Try the same prompt with different demographics and compare results
- Diversify Input: Use diverse sources when researching
- Question Results: If an AI output seems stereotypical or unfair, investigate
- Report Issues: Let AI creators know when you find biased outputs
Real-World Example
Bias Test: Ask the AI to write the same email for an engineer named "Sarah" and again for one named "Priya," then compare the outputs to see if anything changes.
Your Responsibility: Edit the output to use neutral pronouns or correct assumptions before using it.
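The bias test above can be sketched as a short script that builds prompt variants differing only in the name, so you can paste each into your AI tool and compare the replies side by side. The `make_bias_probes` helper and the template string are illustrative, not part of any particular AI product's API.

```python
# Sketch: generate prompt variants that differ only in a name,
# so the AI's outputs can be compared for demographic bias.
# The AI call itself is out of scope; any chat tool works.

def make_bias_probes(template: str, names: list[str]) -> dict[str, str]:
    """Return one prompt per name, identical except for the name."""
    return {name: template.format(name=name) for name in names}

template = "Write a short introduction email for an engineer named {name}."
probes = make_bias_probes(template, ["Sarah", "Priya"])

for name, prompt in probes.items():
    print(f"--- probe for {name} ---")
    print(prompt)
```

Because the prompts are identical except for the name, any systematic difference in tone, seniority, or assumed role in the replies points to bias in the model rather than in your prompt.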
Privacy & Security Considerations
Never Share
- Student names, IDs, or personal information
- Grades, test scores, or performance data
- Medical information or health records
- Social Security numbers or financial data
- Confidential research or proprietary information
- Passwords or security credentials
Data Practices to Follow
- De-identify data: Remove personal identifiers before analyzing
- Check policies: Know your institution's rules on AI and data
- Use institutional versions: If available, use your school's enterprise AI tools (more privacy-protected)
- Read terms: Understand how AI companies use your data
- Secure conversations: Delete sensitive conversations if you've shared them
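The "de-identify data" practice can be sketched as a small redaction pass run before pasting text into an AI tool. The patterns below are illustrative, not exhaustive; real de-identification still needs a human review, and the student-ID format is an assumption.

```python
import re

# Minimal de-identification sketch: replace common identifiers with
# placeholder labels before sharing text with an AI tool.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "STUDENT_ID": re.compile(r"\bID\s*\d{6,}\b"),  # assumed ID format
}

def deidentify(text: str) -> str:
    """Substitute each matched identifier with a bracketed label."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

sample = "Contact jane.doe@school.edu about ID 1234567, SSN 123-45-6789."
print(deidentify(sample))
# → Contact [EMAIL] about [STUDENT_ID], SSN [SSN].
```

Pattern-based redaction catches only what you anticipate; names, addresses, and indirect identifiers usually require manual removal as well.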
FERPA Compliance (For Educators)
- Don't use public AI tools to analyze student data
- Anonymize before sharing with AI systems
- Use education-focused AI tools that comply with FERPA
- Document your AI use for audit trails
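The audit-trail point above can be sketched as an append-only log of AI interactions. The file path, field names, and JSON-lines format are illustrative choices, not a FERPA-mandated format.

```python
import datetime
import json

def log_ai_use(path: str, tool: str, purpose: str) -> None:
    """Append one timestamped JSON record per AI interaction."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "tool": tool,
        "purpose": purpose,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Example: record that anonymized (never identifiable) data was summarized.
log_ai_use("ai_use_log.jsonl", "Copilot Chat", "summarize anonymized survey themes")
```

An append-only, timestamped log makes it easy to answer "what did I use AI for, and when?" during a later review.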
AI Hallucinations & Misinformation
What is a "Hallucination"?
An AI hallucination is when an AI system generates false, made-up, or misleading information that sounds plausible but isn't true.
Examples
- Fabricated citations: plausible-looking paper titles, authors, or URLs that don't exist
- Invented statistics: a specific-sounding figure with no real source behind it
- False details: confident claims about people, events, or code APIs that were never real
How to Avoid Hallucinations
- Always verify important facts with reliable sources
- Cross-check citations before using them
- Use AI for brainstorming, not as your sole research source
- Be especially careful with statistics and specific claims
- Ask AI to provide sources or cite references
- Use the "Precise" mode in Copilot Chat for factual questions
Environmental Impact of AI
AI systems consume significant computational resources. Consider the environmental impact of your AI use:
How You Can Help
- Use Efficiently: Craft clear prompts to minimize back-and-forth
- Know Limitations: Don't use AI for tasks where simpler tools suffice
- Support Sustainability: Choose providers committed to renewable energy
- Advocate: Support organizations working on efficient AI systems
Ethical Decision Tree
Use this decision tree to decide whether AI use is appropriate:
1. Does my assignment or context allow AI use (with disclosure if required)?
   - If NO: Don't use AI (or check with your instructor)
   - If YES: Continue to step 2
2. Am I using AI to enhance my learning or to replace my thinking?
   - If REPLACE: Don't use AI this way
   - If ENHANCE: Continue to step 3
3. Will I review, verify, and edit the AI output?
   - If NO: Don't use AI
   - If YES: Continue to step 4
4. Does my input contain any private or sensitive information?
   - If YES: Don't share that information with AI
   - If NO: Proceed with AI use
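The decision tree above can be sketched as a function: each answer is a boolean, and the return value says whether the AI use passes all four checks. The function name and parameters are illustrative.

```python
def ai_use_ok(allowed: bool, enhances_learning: bool,
              will_verify: bool, contains_sensitive_data: bool) -> bool:
    """Walk the four-step ethical decision tree; True means proceed."""
    if not allowed:
        return False  # step 1: not permitted in this context
    if not enhances_learning:
        return False  # step 2: replacing, not enhancing, your thinking
    if not will_verify:
        return False  # step 3: output won't be reviewed or verified
    if contains_sensitive_data:
        return False  # step 4: private data must stay out of the prompt
    return True

print(ai_use_ok(True, True, True, False))  # → True
```

Note the checks are conjunctive: a single "wrong" answer at any step stops the AI use, mirroring the flow of the tree.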
Resources for Further Learning
- Partnership on AI: Researching the ethical implications of AI
- AI Now Institute: Researching social implications of AI
- Mozilla Internet Health: Privacy and AI resources
- Your Institution's AI Policy: Check your school's specific guidelines