Learning Technologies
More Strategies for Avoiding AI
UVA Faculty Share Why and How They Keep AI Out of Their Courses
Wednesday, February 18, 2026
On February 13, 2026, the Center for Teaching Excellence hosted a second virtual conversation on the topic of "avoiding AI," that is, strategies for keeping AI (mostly) out of one's course. Building on a similar conversation held in November, this session featured three faculty panelists: MC Forelle and Joshua Earle, both assistant professors in engineering and society, and David Singerman, assistant professor of history and American studies. The panelists shared their reasons for keeping AI out of their courses, along with the teaching strategies they use to help students focus less on AI and more on learning.
The CTE's Derek Bruff and Adriana Streifer facilitated the conversation. The following recap was written by Derek based on notes taken by Adriana and the CTE's Jess Taggart.
What's your experience with generative AI? And what's your approach to AI in your teaching?
MC Forelle hasn't touched a single generative AI tool, and has no plans to do so; they are a conscientious objector. MC's area of research is the history and social impact of digital technologies, and they view most forms of AI as the worst kind of techno-capitalism. Their courses focus on critical thinking and research and writing skills, and there's evidence that use of generative AI actively undermines those skills. For those reasons, MC prohibits their students from engaging with AI. (MC also recommended the poem "For a Student Who Used AI to Write a Paper" by Joseph Fasano.)
Joshua Earle has landed in a similar place, prohibiting AI use by his students. He has played around with AI a little, but he has also studied the history of AI, which has roots in the eugenics movement and all the racism and ableism and sexism that comes with that movement. He doesn't see its use as helpful to learning. "Bringing AI to a course on critical thinking," he quipped, "is like bringing a forklift to the gym. The point isn't that the weights move, it's that you move the weights."
David Singerman takes a more exploratory approach to AI. He acknowledges the problems it has, while also seeing its applications. He quoted Thoreau on railroads: "We don't ride on the railroad, it rides on us." Railroads were this horrible, corrupt, capitalist leveling of American life, but, also, trains are cool. He sees similar complexities in generative AI. He teaches in the Engagements, so exploring this with his students is on topic for his courses.
Why do you want your students to avoid using AI (either partially or wholly) in your courses?
David elaborated on Joshua's forklift-in-the-gym analogy. Students using AI can mistake the outcome for the process, but the process is where the learning happens. His courses always include this learning objective: "By the end of this course, students will become more comfortable with difficulty, complexity, and ambiguity as opportunities to learn." (This objective got a lot of love in the text chat during the session!) Having a machine do the work for you is the opposite of that objective. The temptation to use AI as a shortcut is strong, and the tools themselves incentivize it.
Joshua added that even if some future version of AI avoided the problems of today's AI by using ethically sourced data and environmentally friendly computing, it would still skip the process. And it outputs the "median social understanding of a topic," along with interpretations that are often meant to manipulate and obscure things. Our students should do better than that.
[Editorial sidebar: Earlier in the week in a different conversation about AI and teaching, I shared the forklift-in-the-gym analogy, too. I then asked, What if generative AI could instead be more like a treadmill? When it's raining, I don't like to run outside. The treadmill at the Y creates an opportunity for me to exercise. Might there be ways for us to use generative AI to create opportunities for our students for the practice and feedback that's needed for learning? Outsourcing an entire process to AI isn't helpful, but that's not the only way to use AI. But this conversation was about avoiding AI...]
MC pointed out that building capacity to handle complexity, to persevere through hard conversations ("staying with the trouble") isn't just an intellectual endeavor, it's necessary for a functioning society. "If we seek the easy way out of everything, then we are only going to tolerate social interactions that don't challenge us." But that doesn't help us live in common with people in a pluralistic society. MC wants their students to ask, Who is pushing this technology on them and for what purposes? Where are they getting the idea that they'll be left behind if they don't embrace AI? Who benefits from that?
David added that in the first part of this century, higher education didn't do a great job helping students develop the kind of information literacy that's needed to successfully navigate the web and social media. We have to do better this time with AI. [For more on this argument, see "What Past Education Technology Failures Can Teach Us about the Future of AI in Schools" by MIT's Justin Reich.]
What strategies do you use to help students focus less on AI and more on learning?
MC has found Hypothesis a useful tool for engaging students in collaborative annotation of course readings. This is for a course on engineering epistemology, not an easy topic for most engineering students. MC assigns students to persistent groups, and each group has a space to annotate readings together. If a student is struggling with the reading, instead of turning to AI for help, they can see what sense their peers are making of the reading. Knowing that students need some guidance in the work of annotation, MC provides these prompts:
In Hypothesis, you will highlight or annotate:
- At least one quote that either represents a core argument of the reading, or that represents a fundamental point that you find particularly compelling.
- At least one quote that you find confusing. Include an annotation with the question it brings up.
- At least one idea that connects in some way to a previous reading or discussion. Include an annotation explaining this connection.
- At least one thing that you didn’t know – a word, event, person, idea, etc. – and add an annotation clarifying it using new info you looked up. This should not be longer than 3 sentences' worth of new info. Include a link to the (reputable) source where you found this info.
Respond to at least one question from a fellow reader. If you are the first to annotate the reading that week, submit an additional question.
MC has also moved away from a reliance on writing-based assignments, finding intentional, creative ways to engage students with ideas through arts and crafts and other embodied learning experiences. David also uses a form of embodied learning, in his case asking the students in his course on information and democracy to attend a local city council or commission meeting and write about the experience.
Joshua leans into the power of authentic audiences. His students' final projects are published and visible outside the course, and he points out to students that they are accountable for their work, not their AI assistant. He also uses specifications grading, an alternative grading practice where each assignment is graded as "pass" or "revise." Generally, if students do the work, they get a "pass," which lowers the stakes and the anxiety that students feel about grades. Joshua also uses a kind of oral exam, inviting each student to his office for a 30-minute in-person meeting where the student reads their paper, Joshua provides verbal feedback, and the two of them have a conversation about the student's work.
David, who is in his second year serving as a Faculty AI Guide, shared a takeaway from a conversation about AI he had with his colleagues in the history department. They reported hearing from students who didn't want to use AI in their course work, but felt that they had to use it to keep up with peers who were using AI. His colleagues shared various strategies they were using to create environments where students could trust that other students weren't using AI, like on-paper reading quizzes, in-class writing activities, and designating notetakers so students didn't need to be on their laptops during class. One colleague called this "unilateral disarmament."
In the chat, one attendee noted that asking students to abstain from AI use is a big ask, requiring impulse control and some emotional and ethical work. MC responded, "Absolutely! This is why I don't ask them to abstain without designing assignments that encourage abstention or make it easy."
There were a couple of nods to potential positive impacts of generative AI on learning. David noted that when he invited guests to come to class to talk about local organizing around housing issues, he had students use AI to get up to speed on those issues, and that seemed to lead to better questions from the students. And Joshua called out a comment in the text chat about AI as assistive technology, saying that AI will both help and harm students. The CTE's Luke Rosenberg seconded this, noting that AI has been an ongoing topic of conversation for the CTE's Student Advisory Board on Accessibility in Learning. The students with disabilities that Luke has talked to are "acutely aware of this tension and are making very careful and considered choices."
The panelists also acknowledged that situational factors are important to decisions around AI. Joshua argued that while specifications grading can work in courses with more than 100 students, one-on-one writing feedback sessions would not. And David noted that the panelists were all teaching humanistic courses not tightly coupled to long lists of learning outcomes. Someone teaching in the middle of a packed biology curriculum might make different teaching choices.
All three panelists said that responding to AI presents an opportunity to rethink one's teaching and rebuild one's courses "from the ground up," as David put it. That's a good thing, but David also noted that it's a lot of work.
Strategies for Avoiding AI
In my recap of the November "avoiding AI" conversation, I suggested a few categories of approaches that faculty across higher education have been exploring to avoid unhelpful AI uses in their courses. I've been thinking about that taxonomy since then and listening to faculty like the ones in the February panel for ways to refine the taxonomy. Following is my current draft set of categories.
- Be transparent about the purpose of an assignment. Explain to students what knowledge or skills the assignment will help them develop. When students see the value of the assignment to them, they're more likely to put in the hard work of learning without AI. This works best when students are already clear about why your course will help them meet their professional or personal goals. Giving students choices in topics and formats of assignments can help them see their own interests in the work.
- Design more authentic assessments. These assignments typically feature tasks drawn from realistic professional or disciplinary contexts, involve problems that require students to navigate complexity or ambiguity, and ask students to practice interpersonal skills like communication or teamwork. These ingredients can motivate deep engagement and learning, as well as resistance to overreliance on AI. Often, there's an interested audience involved, either other students in a course or external audiences.
- Use an alternative grading scheme. By emphasizing process over product and feedback over grades, “grading for growth” practices can decrease students’ inclination to shortcut their learning with generative AI. For example, see Grand Valley State University mathematics professor Robert Talbert’s grading scheme for a recent online course. That’s a whole-course approach, but any well scaffolded, process-focused assignment can tap into the same dynamic.
- Secure your summative assessments. While some assessments might be open to AI use by students, in this approach, your primary evaluations of student learning are secured, like a proctored test or an oral exam. When students know they won’t have access to AI for big assessments, they’re motivated to avoid using AI as a shortcut outside of those secure environments. The University of Sydney assessment framework describes this approach in detail. Process tracking would fall in this general category.
- Move the work into the classroom. This approach isn’t so much about evaluation as it is about creating AI-free learning experiences for students during class sessions. See, for instance, University of South Florida philosophy professor Lily Abadal’s redesign of the traditional term paper assignment in which students do most of the work during class time. This strategy isn’t useful for asynchronous online courses, but could be adapted to online courses with synchronous components.
- Create embodied learning experiences. As Susan Hrach says, we aren't just brains on sticks. Mindful interactions between our students' bodies and their physical environments can improve learning. This can be as simple as asking students to write on paper or draw a picture. It can also involve visiting novel locations (or familiar locations viewed through a novel lens) to create experiential learning opportunities. See the Deconstructing Calculus project for a unique way to do this in online math courses.
- Do all the things. College of Marin writing instructor Anna Mills has written about using AI detection and process tracking, along with process-focused assignments, student choice, social annotation, peer review, and clear messages about the value of assignments. She calls this a "Swiss cheese" approach, while University of Colorado’s Christopher Ostro calls it a "mosaic" approach. For asynchronous online courses, this might be the best option, in spite of the instructor labor involved.
Thanks to panelists MC Forelle, Joshua Earle, and David Singerman for sharing their experiences adapting their teaching to respond to AI technologies. For more advice on teaching with, about, or against AI, see “Generative AI in Teaching and Learning” on the UVA Teaching Hub.
Header image: "Behavior Power" by Bart Fish & Power Tools of AI, betterimagesofai.org (CC BY)