Welcome to the "Navigating Gen AI: Implications for Teaching and Learning" Learning Path! This Learning Path is your all-access pass to a range of sessions and materials designed to explore the role of Generative AI in higher education classrooms. Through a series of discussions, readings, and collaborative sessions, we will examine the impact of these tools on our educational practices.
The components are arranged in a logical order to help you progressively build your understanding of Generative AI's implications. However, there’s no requirement to follow a set order or engage with every resource. This path is designed to meet you where you are, offering opportunities to explore, revisit key concepts, and expand your understanding of Generative AI whenever it fits your schedule. Let's explore together how Gen AI is redefining teaching and learning, and what it means for you and your students.
Your guide for engaging with this Learning Path:
In this Learning Path, we use a variety of engagement methods to ensure flexibility and cater to the different ways you absorb and process information. Each component is marked by specific terms to guide you:
- Attend: Join live workshops where real-time interaction is key.
- Read: Access and reflect on text-based resources at your own pace.
- Explore: Delve into curated resources to discover new ideas and strategies.
- Engage: Participate in extended courses designed for deeper dives into topics.
- Practice: Work through activities to apply what you’ve learned.
- Watch: Engage with video content to visualize and understand concepts.
Getting Started with Gen AI
- Explore: Visit the AI Pedagogy Project (AIPP), developed by the metaLAB at Harvard, for an introductory guide to AI tools, an LLM Tutorial, additional AI resources, and curated assignments to use in your own classroom. The metaLAB has also published a quick-start guide, Getting Started with ChatGPT.
Relevant UC Berkeley Resources
- Explore: AI in Teaching & Learning, RTL
This page discusses the benefits and challenges of integrating GenAI into the classroom, emphasizing the importance of ethical considerations and adherence to the UC Responsible AI Guidelines.
- Explore: Understanding AI Writing Tools and Their Uses for Teaching and Learning at UC Berkeley, CTL
This page offers guidance for instructors on establishing clear policies regarding GenAI use, revising learning outcomes, updating course materials, and incorporating research lessons to help students verify AI-generated information.
- Read: UCB Guidelines for the Use of Generative AI and Permissible Use Cases, People & Culture
These guidelines address the use of AI in the operations of UC Berkeley (the “University”).
- Explore: AI at UC Berkeley, Berkeley IT
This page offers an overview of policies, governance structures, and resources related to AI usage on campus. It includes guidelines on the appropriate use of generative AI tools, legal advisories from the University of California's Office of General Counsel, and information on AI applications in teaching and learning. Additionally, the page provides details about licensed AI tools available to the UC Berkeley community and outlines draft guidelines for AI use developed by the Office of People & Culture.
- Engage: AI Essentials at UC Berkeley, UC Learning Center
This course covers foundational Artificial Intelligence concepts, UC policies regarding the usage of AI tools, and opportunities for application in Higher Education. It is designed to foster a shared understanding of Artificial Intelligence across the campus community, recognizing the diverse levels of familiarity participants may have with the topic. Whether participants are AI experts or are simply exploring the subject for the first time, the course aims to provide new and practical insights for everyone.
- Explore: UC Legal Alert on AI
- Join: UC Berkeley AI community
Crafting Your AI Course Policy
- Read: Should You Add an AI Policy to Your Syllabus?, Kevin Gannon, The Chronicle of Higher Education
This article discusses the importance of establishing clear guidelines for the use of AI tools like ChatGPT in academic settings. Gannon emphasizes that, given the increasing prevalence of AI in education, instructors should thoughtfully consider how these tools align with their course objectives and academic integrity standards. He suggests that a well-crafted AI policy can help students understand acceptable uses of AI, promote ethical engagement with technology, and maintain the integrity of the learning process.
- Read: Creating your course policy on AI, Stanford Teaching Commons
This guide encourages reflection on the purpose of an AI policy, offers example statements, and suggests including specifics such as applicable tools, conditions of use, consequences for non-compliance, and the rationale behind the policy. The guide emphasizes transparency and student support, aiming to align AI policies with course objectives and uphold academic integrity.
- Explore: Example AI course policy statements, created by Lance Eaton
- Practice: Worksheet for Creating Your AI Syllabus Statement, Center for Teaching and Learning, Stanford
Draft your policy statement using this worksheet.
Gen AI in Teaching
- Attend: Navigating AI in Course Design: Early Steps and Reflections | Jan 13
Join this workshop to explore how an instructional designer used AI for course design, including creating images, podcasts, and course introductions. See the tools, results, and tips for getting started with AI.
View details and register for this workshop.
- Attend: Understanding and Responding to Generative AI in Teaching and Learning | Jan 14
Join this workshop to explore practical approaches to engaging with generative AI in your class. You will gain a foundational overview of the AI landscape, discuss opportunities and concerns, and learn strategies for addressing academic integrity while fostering open conversations with students about AI’s impact on learning.
View details and register for this workshop.
- Attend: Enhancing Classroom Experiences with Generative AI: A Workshop for Personalized Learning with Open Source Tools | Apr 1
Join this session to explore case studies on using Gen AI in STEM classrooms through the Open Adaptive Tutor (OATutor). Learn about automated curriculum alignment, adapting open educational resources, and using collaborative authoring tools to refine educational materials with AI.
AI in Education: The Student View
- Attend: AI Unfiltered: Student Voices | Feb 6
Join this session for a moderated student panel exploring how Berkeley undergraduate and graduate students are using generative AI. Hear their thoughts on course policies, academic integrity, classroom use, and their questions and concerns about AI tools.
View details and register for this workshop.
- Attend: Analysis of Student Pulse Survey on AI and Faculty Views | Feb 26
Join this session to hear from UC Berkeley and UC Davis researchers on how faculty narratives about generative AI may differ from student experiences. Discover how data can help align these perspectives and inform decisions about AI policies and learning opportunities.
View details and register for this workshop.
- Read: Student perspectives on creative pedagogy: Considerations for the Age of AI, Sabrina Habib, Thomas Vogel, Evelyn Thorne
This article explores the impact of creative pedagogy on students in higher education, emphasizing the importance of creativity as a teachable skill in the age of AI. It highlights three themes: building confidence, mastering creative processes, and applying creativity practically. The findings suggest that integrating creativity-focused pedagogy prepares students for complex challenges in an AI-driven world.
- Read: Student Perspectives on the Use of Generative Artificial Intelligence Technologies in Higher Education, International Journal for Educational Integrity
This article explores student perspectives on Gen AI technologies, focusing on their use, attitudes, and implications for academic integrity policies. Conducted at the University of Liverpool, the study found that most students are aware of Gen AI tools like ChatGPT, with mixed opinions on their academic use. While supportive of Gen AI for grammar and minor assistance, students generally disapprove of its use for complete assignments. The findings emphasize the need for clear, inclusive university policies that balance educational integrity with equitable access to emerging technologies.
- Watch: Students’ Perspectives on Using AI, Educause
Ethical Considerations and Academic Integrity with Gen AI
- Attend: How AI Literacy Can and Will Impact Academic Integrity | Feb 4
Join this session to explore how fostering AI literacy and clearly communicating the purpose of assignments can reduce academic integrity concerns. Learn strategies to engage students by connecting assessments to their goals and experiences, while promoting critical understanding of generative AI’s workings and impact.
View details and register for this workshop.
- Read: Initial Thoughts and Suggestions Regarding Implications of Large Language Models (LLMs) (e.g. ChatGPT) for Fall Final Assessments, UC Berkeley Academic Senate
This resource offers initial observations and suggestions from UC Berkeley's Academic Senate Committee on Teaching (COT) and the Center for Teaching and Learning (CTL) regarding the implications of Large Language Models (LLMs), such as ChatGPT, for final assessments. It addresses concerns about academic integrity and the accurate evaluation of student learning, providing guidance for incorporating generative AI tools into assessment planning.
- Explore: "AI & Ethics" slide deck, Torrey Trust, and Teaching AI Ethics, Leon Furze.
This presentation examines the intersection of artificial intelligence (AI) and ethics, focusing on the implications of AI in educational contexts. It discusses ethical considerations, potential biases in AI systems, and the responsibilities of educators in integrating AI technologies into teaching and learning. The presentation aims to inform educators about the ethical challenges associated with AI and to promote thoughtful and responsible use of AI tools in education.
- Watch: How AI Can Be Biased & What Can We Do About it?, UC Berkeley People & Culture
- Read: Generative Artificial Intelligence and Copyright Law, Congressional Research Service
This document examines how generative AI technologies challenge traditional copyright principles. It explores issues of authorship, copyrightability of AI-generated works, and potential infringement in the training and output of generative AI systems. The report highlights ongoing legal cases, decisions, and debates surrounding whether AI outputs can be copyrighted, who owns such copyrights, and how AI training processes may or may not constitute fair use. It also considers legislative and judicial approaches to addressing these unresolved questions in the context of evolving AI capabilities.
- Read: When AI Gets It Wrong: Addressing AI Hallucinations and Bias, MIT Teaching & Learning Technologies.
This resource examines the challenges of generative AI systems producing biased or inaccurate content. It highlights issues such as AI-generated images and text that perpetuate gender and racial stereotypes, as well as instances where AI tools fabricate data—a phenomenon known as "hallucination." The article attributes these problems to the nature of AI training data, which often includes societal biases and inaccuracies, and to the design of AI models that prioritize generating plausible content over verifying truthfulness. To mitigate these issues, the article recommends critically evaluating AI outputs, understanding the limitations of AI technologies, and fostering open discussions about the ethical implications of AI in educational settings.
- Read: Measuring the Environmental Impacts of Artificial Intelligence Compute and Applications, Organisation for Economic Co-operation and Development.
This article explores the sustainability challenges and opportunities associated with AI technology. It distinguishes between the direct environmental impacts of AI systems, such as energy consumption and greenhouse gas emissions during production, operation, and disposal, and indirect impacts, including both positive and negative effects of AI applications. The report emphasizes the need for standardized measurement frameworks, improved data collection, and policy efforts to mitigate environmental damage while leveraging AI to address sustainability challenges.
Educators' Responses to Gen AI Over the Past Year
- Attend: The AI-Engaged Classroom - Teaching & Learning Journal Club | Apr 9
This session examines how GenAI is influencing classroom assignments and assessments, drawing on insights from Teaching with AI: A Practical Guide to a New Era of Human Learning by Bowen and Watson (2024). The impact of GenAI on student engagement, policies, and grading will be explored, along with strategies to design assignments that emphasize human effort and process-based learning. The importance of intrinsic motivation and classroom environment in guiding productive uses of GenAI will also be discussed.
View details and register for this workshop.
- Attend: Educating in the AI Age: Faculty Insights | Apr 10
Join this session to hear faculty from diverse disciplines share their experiences with generative AI in teaching. Gain valuable insights on its role and impact in higher education to help guide your approach to integrating AI into your courses.
View details and register for this workshop.
- Read: Assignment Makeovers in the AI Age: Essay Edition, Derek Bruff
In this resource, Derek Bruff examines the necessity of redesigning essay assignments in response to the rise of AI text-generation tools.
- Read: Using AI to Implement Effective Teaching Strategies in Classrooms: Five Strategies, Including Prompts, Ethan R. Mollick & Lilach Mollick
This paper explores how Gen AI tools, like ChatGPT, can support evidence-based teaching strategies to enhance learning. It highlights five strategies: providing varied examples and explanations, addressing misconceptions, employing frequent low-stakes testing, assessing student learning, and practicing distributed learning. The paper argues that AI can help educators implement these strategies efficiently, serving as a "force multiplier," while cautioning that instructor oversight is essential to ensure accuracy and relevance in AI-generated materials.
Using AI to Create Content
- Attend: Using the New iClicker AI Question Generator | Mar 11
This session explores iClicker’s AI Question Creator, a generative AI tool for crafting discipline-specific questions. Participants will learn to create questions designed to boost student engagement and discussions while integrating the tool into their teaching to save time and enhance classroom interactions.
AI Detection: Tools, Accuracy, and Best Practices
- Explore: AI Detection Tools, Sacred Heart University
This resource offers a list of AI detection tools, such as AI Writing Check, GPTZero, and Turnitin's AI Writing Detection Tool, along with a collection of articles on AI detection in academic settings. It provides links to both tools and articles, making it a comprehensive resource for exploring AI detection options and insights.
- Read: Availability of Turnitin Artificial Intelligence Detection for UC Berkeley, RTL
This page offers an overview of Turnitin's AI writing detection feature, released on April 4, 2023, designed to identify text generated by AI tools like ChatGPT. UC Berkeley has currently opted out of enabling this feature for instructors, pending a thorough review to assess its privacy, security, accessibility, and overall functionality. Instructors interested in participating in an opt-in pilot can contact academicintegrity@berkeley.edu.
- Read: The Creation of Bad Students: AI Detection for Non-Native English Speakers, Valeria Ramírez Castañeda, UC Berkeley D-Lab Data Science for Social Justice Fellow
This article explores the use of AI detection tools in academia, arguing that they perpetuate biases against non-native English speakers and reinforce a punitive, surveillance-based educational model.
- Read: GPT Detectors Are Biased Against Non-native English Writers, Weixin Liang et al.
This article examines the bias inherent in AI-generated text detectors, particularly against non-native English speakers. The authors found that these tools frequently misclassify non-native writing as AI-generated due to linguistic simplicity, while often failing to identify genuine AI content. The study highlights the risks of unfair discrimination in academic and evaluative settings and advocates for developing more inclusive and robust detection methods. Recommendations include enhancing linguistic diversity in non-native samples and reevaluating the reliance on measures like text perplexity, which disproportionately impact non-native speakers.
- Explore: Q&A on AI Text Detection, Center for Teaching & Learning, University of Alaska Fairbanks
This resource provides a Q&A format addressing common questions about AI text detection tools in education. It highlights that, as of March 2024, there are no reliable tools capable of consistently identifying AI-generated writing in student assignments. The resource references studies indicating that existing detection products often produce false positives, flagging human-written text as AI-generated. It also discusses the ethical implications of using such tools and suggests alternative methods for educators to assess the authenticity of student work.
- Read: Why AI Writing Detectors Don’t Work, Benji Edwards
This resource offers an analysis of why AI writing detectors often misclassify human-authored texts, such as the U.S. Constitution, as AI-generated. It delves into the limitations and inaccuracies of these detection tools, highlighting their propensity for false positives.
- Read: Post-Apocalyptic Education, Ethan Mollick
This resource offers an in-depth analysis of the challenges AI poses to traditional educational methods, particularly concerning homework and assessments. It examines the widespread use of AI among students, the difficulties in detecting AI-generated work, and the potential negative impacts on genuine learning. The article advocates for reimagining educational practices to integrate AI effectively, emphasizing the importance of fostering critical thinking and authentic engagement in the learning process.
This resource from OpenAI offers guidance for educators on addressing instances where students submit AI-generated content as their own. It discusses the limitations of AI detection tools, recommends encouraging students to share their AI interactions to promote accountability, and emphasizes the importance of fostering AI literacy among students.