Navigating GenAI: Implications for Teaching and Learning

Welcome to the "Navigating GenAI: Implications for Teaching and Learning" Learning Path! This path offers a flexible, step-by-step guide to help you navigate Generative AI in higher education. Whether you're figuring out what GenAI tools do, drafting a course policy, or rethinking how students demonstrate learning, this path supports thoughtful, practical decision-making.

The five steps below are designed to meet you where you are, offering campus-specific guidance, curated resources from experts and across institutions, and space to reflect. Start at any point and return as your questions or teaching needs evolve.

Your guide for engaging with this Learning Path:

In this Learning Path, we use a variety of engagement methods to ensure flexibility and cater to the different ways you absorb and process information. Each component is marked by specific terms to guide you:

  • Attend: Join live workshops where real-time interaction is key.

  • Read: Access and reflect on text-based resources at your own pace.

  • Explore: Delve into curated resources to discover new ideas and strategies.

  • Watch: Engage with video content to visualize and understand concepts.

  • Practice: Work through activities to apply what you’ve learned.

Step 1: Explore GenAI Basics and Campus Resources & Policies

Start by building essential knowledge and understanding the institutional landscape.

  • Familiarize yourself with available campus AI tools.
    • Explore: Licensed Generative AI Tools, Berkeley AI Hub
      This page lists AI tools approved for campus use, highlighting each tool's data protection level (P1/P2/P3) and any usage restrictions.
  • Review campus, departmental, and program GenAI policies.
    • Read: Understanding GenAI and Campus Expectations, UC Berkeley Center for Teaching and Learning
      This page highlights Berkeley’s 2025 Faculty Senate Working Group guidance: individual faculty are empowered to decide whether to incorporate or prohibit GenAI in their courses, and arguments for both approaches are presented to help inform course policies.
    • Read: Appropriate Use of Generative AI Tools, UC Berkeley Office of Ethics, Risks, and Compliance Services
      This advisory outlines responsible use of generative AI when working with different types of university data. It specifies which data protection levels (P1–P4) are allowed and spells out prohibited uses, such as inputting student records or proprietary research, or using AI to complete academic work without instructor approval.
    • Read: UCB Guidelines for the Use of Generative AI and Permissible Use Cases, UC Berkeley People & Culture
      These guidelines address the use of AI in the operations of UC Berkeley (the “University”).
    • Explore: Example of Departmental AI Policy: Berkeley Law Generative AI Policy
  • Understand UC Berkeley's academic integrity policy.
    • Explore: Academic Integrity and Support, UC Berkeley International Office
      This page outlines examples of plagiarism and academic dishonesty, including GenAI misuse, and offers strategies and resources for avoiding violations.
    • Explore: Academic Misconduct, UC Berkeley Center for Student Conduct
      This resource defines the types of academic misconduct recognized at Berkeley and explains students’ responsibilities under the Code of Student Conduct.
    • Explore: Graduate Student Academic Integrity, UC Berkeley Graduate Division
      This page explains Berkeley’s expectations for graduate students regarding scholarly research and writing, including responsible attribution, avoidance of plagiarism, and compliance with academic standards.

Step 2: Craft Your GenAI Course Policy and Set Clear Expectations

No matter how you plan to use (or not use) GenAI in your course, your students need clarity. Turn foundational knowledge into clear course guidelines.

  • Draft or adapt your course GenAI and academic integrity statement, and add your policy to the syllabus or course site.

    • Read: Should You Add an AI Policy to Your Syllabus?, Kevin Gannon, The Chronicle of Higher Education
      This article discusses the importance of establishing clear guidelines for the use of AI tools like ChatGPT in academic settings. Gannon emphasizes that, given the increasing prevalence of AI in education, instructors should thoughtfully consider how these tools align with their course objectives and academic integrity standards. He suggests that a well-crafted AI policy can help students understand acceptable uses of AI, promote ethical engagement with technology, and maintain the integrity of the learning process.
    • Read: Communicating Course Policies and Talking with Students, UC Berkeley Center for Teaching and Learning
      This resource helps instructors communicate clear, constructive course policies on generative AI. It outlines how to set expectations, foster transparency and academic integrity, and engage students in thoughtful dialogue about GenAI use. The page provides sample syllabus language for a range of approaches, from full integration to complete prohibition.
    • Read: Creating your course policy on AI, Stanford Teaching Commons
      This guide encourages reflection on the purpose of an AI policy, offers example statements, and suggests including specifics such as applicable tools, conditions of use, consequences for non-compliance, and the rationale behind the policy. The guide emphasizes transparency and student support, aiming to align AI policies with course objectives and uphold academic integrity.
    • Explore: Example AI course policy statements, created by Lance Eaton
    • Practice: Worksheet for Creating Your AI Syllabus Statement (PDF file), Center for Teaching and Learning, Stanford
      Draft your policy statement using this worksheet.
  • Explore students’ perspectives on GenAI to understand their experiences, concerns, and hopes.
    • Read: Student perspectives on creative pedagogy: Considerations for the Age of AI, Sabrina Habib, Thomas Vogel, and Evelyn Thorne
      This article explores the impact of creative pedagogy on students in higher education, emphasizing the importance of creativity as a teachable skill in the age of AI. It highlights three themes: building confidence, mastering creative processes, and applying creativity practically. The findings suggest that integrating creativity-focused pedagogy prepares students for complex challenges in an AI-driven world.
    • Read: Student Perspectives on the Use of Generative Artificial Intelligence Technologies in Higher Education, International Journal for Educational Integrity
      This article explores student perspectives on GenAI technologies, focusing on their use, attitudes, and implications for academic integrity policies. Conducted at the University of Liverpool, the study found that most students are aware of GenAI tools like ChatGPT, with mixed opinions on their academic use. While supportive of GenAI for grammar and minor assistance, students generally disapprove of its use for complete assignments. The findings emphasize the need for clear, inclusive university policies that balance educational integrity with equitable access to emerging technologies.
    • Watch: Students’ Perspectives on Using AI, EDUCAUSE
  • Discuss expectations and guidelines with your students in class.
    • Read: Holding That Difficult AI Conversation With Learners, Karen Ferreira-Meyers
      This resource guides educators in facilitating nuanced classroom discussions about GenAI, helping instructors address student fears, correct misconceptions, and promote critical technological literacy. It offers a dialogic framework centered on transparency, ethical reasoning, and collaborative inquiry so learners can move from seeing AI as either magical or threatening to engaging with it thoughtfully.

Step 3: Understand AI Detection Tools and Their Limits

Learn what AI detection tools are, why they often deliver false results, and how misuse can create equity and trust issues.

  • Explore: AI Detection Tools, Sacred Heart University
    This resource offers a list of AI detection tools, such as AI Writing Check, GPTZero, and Turnitin's AI Writing Detection tool, along with a collection of articles on AI detection in academic settings. It links to both the tools and the articles, making it a convenient starting point for exploring AI detection options and the surrounding debate.
  • Read: Availability of Turnitin Artificial Intelligence Detection for UC Berkeley, UC Berkeley Research, Teaching, and Learning (RTL)
    These pages offer an overview of Turnitin's AI writing detection feature, released on April 4, 2023, which is designed to identify text generated by AI tools like ChatGPT. UC Berkeley has currently opted out of enabling this feature for instructors, pending a thorough review of its privacy, security, accessibility, and overall functionality.
  • Read: GPT Detectors Are Biased Against Non-native English Writers, Weixin Liang et al.
    This article examines the bias inherent in AI-generated text detectors, particularly against non-native English speakers. The authors found that these tools frequently misclassify non-native writing as AI-generated due to its linguistic simplicity, while often failing to identify genuine AI content. The study highlights the risks of unfair discrimination in academic and evaluative settings and advocates for developing more inclusive and robust detection methods. Recommendations include enhancing the linguistic diversity of non-native samples and reevaluating the reliance on measures like text perplexity, which disproportionately impact non-native speakers (a brief sketch of the perplexity measure appears after this list).
  • Read: Post-Apocalyptic Education, Ethan Mollick
    This resource offers an in-depth analysis of the challenges AI poses to traditional educational methods, particularly concerning homework and assessments. It examines the widespread use of AI among students, the difficulties in detecting AI-generated work, and the potential negative impacts on genuine learning. The article advocates for reimagining educational practices to integrate AI effectively, emphasizing the importance of fostering critical thinking and authentic engagement in the learning process.
  • Read: How Can Educators Respond to Students Presenting AI-generated Content as Their Own?, OpenAI
    This resource from OpenAI offers guidance for educators on addressing instances where students submit AI-generated content as their own. It discusses the limitations of AI detection tools, recommends encouraging students to share their AI interactions to promote accountability, and emphasizes the importance of fostering AI literacy among students.
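
For those curious about the mechanics behind these detectors, the short sketch below is a minimal illustration (not drawn from any of the resources above) of the text perplexity measure that Liang et al. critique: text a language model finds highly predictable receives a low perplexity score and is therefore more likely to be flagged as AI-generated, which is one reason plainer prose from non-native writers can be misclassified. It assumes the open-source Hugging Face transformers and torch packages and the public "gpt2" checkpoint; commercial detectors rely on their own proprietary signals, so treat this as a conceptual illustration rather than a description of how any particular product works.

```python
# Illustrative sketch only: compute a language model's perplexity for a passage.
# Lower perplexity = more predictable text = more "AI-like" to perplexity-based detectors.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return the model's perplexity for `text` (lower means more predictable)."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Supplying labels makes the model return the average cross-entropy loss.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

# Simple, formulaic sentences tend to score lower than idiosyncratic prose,
# which is the bias the Liang et al. article documents.
print(perplexity("The cat sat on the mat. The dog sat on the rug."))
print(perplexity("Perched improbably, the tabby surveyed its crumpled kingdom."))
```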

Step 4: Revisit Assignments and Assessments

As GenAI tools evolve, it’s worth reconsidering how students demonstrate learning, and you might need to update coursework to reflect new opportunities and challenges brought by GenAI.

  • Read: A Framework for Designing Assignments in the Age of AI, Harvard College Writing Program
    This guide offers strategies for designing assignments that align with your learning goals while addressing GenAI use. It includes guiding questions, assignment redesign tips, and practical classroom ideas to foster critical thinking, originality, and meaningful student engagement in the AI era.
  • Read: Using AI to Implement Effective Teaching Strategies in Classrooms: Five Strategies, Including Prompts, Ethan R. Mollick & Lilach Mollick
    This paper explores how GenAI tools, like ChatGPT, can support evidence-based teaching strategies to enhance learning. It highlights five strategies: providing varied examples and explanations, addressing misconceptions, employing frequent low-stakes testing, assessing student learning, and practicing distributed learning. The paper argues that AI can help educators implement these strategies efficiently, serving as a "force multiplier," while cautioning that instructor oversight is essential to ensure accuracy and relevance in AI-generated materials.
  • Explore: AI Resistant Assignments, Washington University
    This guide offers strategies for designing coursework that isn’t easily completed by GenAI. It encourages instructors to think through how AI might respond to assignments and thoughtfully structure tasks that encourage original student thinking.

Step 5: Dig Deeper: Understand AI's Ethical Complexities

Ready for more? Here’s the space to challenge, reflect, and connect with what’s tricky about GenAI.

  • Explore"AI & Ethics" slide deck, Torrey Trust
    This presentation examines the intersection of artificial intelligence (AI) and ethics, focusing on the implications of AI in educational contexts. It discusses ethical considerations, potential biases in AI systems, and the responsibilities of educators in integrating AI technologies into teaching and learning. The presentation aims to inform educators about the ethical challenges associated with AI and to promote thoughtful and responsible use of AI tools in education.
  • Watch: How AI Can Be Biased & What Can We Do About It?, UC Berkeley People & Culture
  • Read: When AI Gets It Wrong: Addressing AI Hallucinations and Bias, MIT Teaching & Learning Technologies
    This resource examines the challenges of generative AI systems producing biased or inaccurate content. It highlights issues such as AI-generated images and text that perpetuate gender and racial stereotypes, as well as instances where AI tools fabricate data—a phenomenon known as "hallucination." The article attributes these problems to the nature of AI training data, which often includes societal biases and inaccuracies, and to the design of AI models that prioritize generating plausible content over verifying truthfulness. To mitigate these issues, the article recommends critically evaluating AI outputs, understanding the limitations of AI technologies, and fostering open discussions about the ethical implications of AI in educational settings.
  • Read: Generative Artificial Intelligence and Copyright Law, Congressional Research Service
    This document examines how generative AI technologies challenge traditional copyright principles. It explores issues of authorship, copyrightability of AI-generated works, and potential infringement in the training and output of generative AI systems. The report highlights ongoing legal cases, decisions, and debates surrounding whether AI outputs can be copyrighted, who owns such copyrights, and how AI training processes may or may not constitute fair use. It also considers legislative and judicial approaches to addressing these unresolved questions in the context of evolving AI capabilities.
  • Read: Measuring the Environmental Impacts of Artificial Intelligence Compute and Applications, Organisation for Economic Co-operation and Development (OECD)
    This article explores the sustainability challenges and opportunities associated with AI technology. It distinguishes between the direct environmental impacts of AI systems, such as energy consumption and greenhouse gas emissions during production, operation, and disposal, and indirect impacts, including both positive and negative effects of AI applications. The report emphasizes the need for standardized measurement frameworks, improved data collection, and policy efforts to mitigate environmental damage while leveraging AI to address sustainability challenges. 

Attend Live Sessions - Fall 2025

  • Approaches to Addressing Student AI Engagement This Fall, August 11, 1:00 p.m.
    Join this session to explore strategies for communicating your expectations around generative AI use in your course. We will discuss common concerns, explore different approaches, and leave with a plan for shaping the classroom culture you want this fall.
  • Teaching Peer Review: Helping Students Recognize Their Value in the GenAI Era, August 13, 1:00 p.m.
    Join this session to access a ready-to-use bCourses module that teaches students how to give and receive peer feedback. Learn how peer review can build community, reduce GenAI reliance, foster collaboration, and help students develop essential skills like critical thinking, communication, and self-assessment.
  • How AI Literacy Can and Will Impact Academic Integrity, August 19, 1:00 p.m.
    Join this session to explore how fostering AI literacy and clearly communicating the purpose of assignments can reduce academic integrity concerns. Learn strategies to engage students by connecting assessments to their goals and experiences, while promoting critical understanding of generative AI’s workings and impact.