Avoiding AI

Strategies for Keeping AI (Mostly) Out of Your Course

By Derek Bruff

Thursday, January 8, 2026

On November 19, 2025, the Center for Teaching Excellence hosted a virtual lunch-and-learn titled “Avoiding AI: Strategies for Keeping AI (Mostly) Out of Your Course.” The event was part of the CTE’s ongoing efforts to help faculty and other instructors make informed, intentional choices as they adapt their teaching to new generative AI technologies. Some instructors are choosing to embrace AI, some to explore it cautiously with their students, and others to keep it out of their courses. The November lunch-and-learn focused on the latter choice and what it looks like in practice.

Why Avoid AI?

The event was facilitated by CTE associate director Derek Bruff, who started the discussion by outlining a few reasons why an instructor might want their students to avoid using AI:

  • AI can undermine student learning. When students outsource a key learning task like making sense of a reading, taking notes on a lecture, or writing an essay, they miss out on opportunities to deepen their understanding and build their skills. There are more nuanced ways to use AI, but cognitive offloading of this sort can prevent learning.
  • In some cases, AI doesn’t help students learn. “There is nothing that I ask my students to do in my classes that benefits from being done by generative AI,” wrote Knox College history professor Cate Denial in January 2025. “There is no added value to my courses or my students’ learning.”
  • Responsible use of AI is problematic. Generative AI has problems with truth, bias, and sycophancy. It raises questions about intellectual property and student privacy. And many consider its environmental impacts not worth whatever utility the technology offers. See Leon Furze’s “Teaching AI Ethics” series for an exploration of these problems.

The lunch-and-learn featured two panelists, each of whom shared their reasons for having students avoid AI. James Wyckoff, professor emeritus of education and public policy, teaches an upper-level undergraduate course on education policy. He asks his students to write short policy memos during the course, memos that new AI tools can write very well. He’s concerned about students outsourcing the research and thinking he wants them to do in the course, and so he tells his students not to employ AI as a substitute for their own skill development. 

Ethan King, assistant professor of English, teaches a first-year undergraduate writing course on attention and distraction with a “red light” policy against using AI. He’s teaching fundamental academic skills in this course. “These skills only develop through doing the work itself,” Ethan said, “even if it feels slow and messy.” Also, when students turn to AI for academic support, it disrupts the learning community he’s trying to build for students who are new to college.

How to Avoid AI?

What strategies are there for directing students away from the use of AI in course assignments and activities? Derek noted that simply banning AI is complicated, because AI is now nearly everywhere. You’ll find it summarizing product reviews on Amazon, offering to help you write a post on LinkedIn, and popping up in Adobe Acrobat Reader: “This appears to be a long document, save time by reading a summary using AI assistant.” You can’t just tell your students not to use ChatGPT.

Detecting AI use is also challenging. There are AI detectors, but they produce false positives, flagging human-written text as AI-generated, which can lead to messy investigations. They also produce false negatives, passing off AI-generated text as human-written, in part because many AI tools are designed to evade detectors. This is why UVA discourages instructors from using generative AI detectors for the purpose of detecting academic fraud.

Also, trying to ferret out AI use isn’t much fun. As one participant wrote in the chat, “I have no interest in being an AI cop.”

What then to do? Derek described a few general approaches that faculty across higher education have been exploring for avoiding unhelpful AI use in their courses.

  1. Appeal to students’ intrinsic motivation to learn. Be transparent about the purpose of your assignments and how they will help students build the knowledge and skills they’re interested in developing. This works best when students are already clear about why your course will help them meet their professional or personal goals. This approach can also include giving students more choice in assignment topic or format, since this kind of agency can enhance motivation.
  2. Use an alternative grading scheme. By emphasizing process over product and feedback over grades, “grading for growth” practices can decrease students’ inclination to shortcut their learning with generative AI. For example, see Grand Valley State University mathematics professor Robert Talbert’s grading scheme for a recent online course. That’s a whole-course approach, but any well-scaffolded, process-focused assignment can tap into the same dynamic.
  3. Secure your summative assessments. While some assessments might be open to AI use by students, in this approach, your primary evaluations of student learning are secured, like a proctored test or an oral exam. When students know they won’t have access to AI for big assessments, they’re motivated to avoid using AI as a shortcut outside of those secure environments. The University of Sydney assessment framework describes this approach in detail. Process tracking would fall in this general category.
  4. Move the work into the classroom. This approach isn’t so much about evaluation as it is about creating AI-free learning experiences for students during class sessions. See, for instance, University of South Florida philosophy professor Lily Abadal’s redesign of the traditional term paper assignment in which students do most of the work during class time. This strategy isn’t useful for asynchronous online courses, but could be adapted to online courses with synchronous components.
  5. Do all the things. College of Marin writing instructor Anna Mills has written about using AI detection and process tracking, along with process-focused assignments, student choice, social annotation, peer review, and clear messages about the value of assignments. She calls this a “Swiss cheese” approach, while University of Colorado’s Christopher Ostro calls it a “mosaic” approach. For asynchronous online courses, this might be the best option, in spite of the instructor labor involved.

Case Study: Leaning into Oral Exams

Panelist Jim Wyckoff was concerned that his education policy students might rely too much on AI for the policy memos he assigns them, so he added a brief oral exam at the end of his course (approach #3 above). Each of his 35 students signs up for a 10-minute time slot to meet with Jim for a “policy briefing.” He asks his students a few questions to see if they understand what they wrote in their final policy memos. Jim found the meetings helpful for both “assessing their understanding of the substance” of their memos and “employing a different mode to convey their knowledge.” He plans to continue using these policy briefings, perhaps expanding them to 15 minutes each.

Adding this secure assessment prompted Jim to realize that oral communication about education policy matters was, in fact, a tacit learning objective in his course. He made that an explicit learning objective and started identifying other points during the course when students could practice this skill, such as brief class presentations of articles, small-group discussions, structured class debates, and group presentations. Communication has become a meaningful part of the course grade, and students say that while public speaking causes some anxiety, they’re grateful for the chance to improve their skills in this area.

With the policy briefings in place, Jim was comfortable letting his students use AI on their memos. They have to draft their memos without AI, then they’re invited to use AI when revising their drafts. Of the memo grade, 40% comes from that first student-generated draft and 60% from the second, perhaps AI-assisted draft. To help students use AI thoughtfully, Jim devoted a class session to improving their skills in writing AI prompts and to showing them the limitations of generative AI for this kind of work, sharing examples where AI-written policy briefs were uncreative and not specific enough to be useful (approach #1 above).

Case Study: Process over Product

Ethan King argued that to figure out how to help your students avoid using AI, you need to understand why they turn to AI in the first place. He said it typically isn’t laziness, but instead a sense that AI will produce better quality work. Students often doubt their own skills, especially when they are new to college. Other students turn to AI because they see education as a “transactional process.” That is, they’re more concerned about scoring grades than learning, so they use AI to get an edge in that process.

As a result, Ethan uses an alternative grading system in which students get credit for completing the various steps in a writing process (approach #2 above). Ethan described his system as a combination of specifications and labor-based grading, for those familiar with those terms. His goal is to shift students’ attention away from the final product of writing and toward the process of writing itself. By giving students opportunities to revise their work along the way, he lowers the grading stakes, allowing them to focus on progress rather than perfection.

Ethan also designs assignments where students can find a clear sense of purpose and relevance (approach #1 above). The course is on attention and distraction, so Ethan will ask students to visit a physical space on Grounds and pay attention to their emotional responses, or to keep a journal of their social media use and reflect on the patterns they see. The writing they then do is grounded in their personal experiences, which hopefully motivates them to avoid AI so they can better understand those experiences.

Case Study: In-Class Writing

Ethan also mentioned that he’s been moving some of the writing process in his course into his class sessions (approach #4 above), inspired by recent presentations by his English department colleague, associate professor Jim Seitz. Jim has moved almost all of the writing in his courses into his classrooms. Students arrive in class, Jim gives them a writing prompt, and the students write. This isn’t a tech-free environment; students are welcome to use their laptops to write, as long as they use the “focus” mode of their word processor and turn their Wi-Fi off. Jim saves some time at the end of class to discuss and debrief the day’s writing experience.

Jim Seitz was on the call at the lunch-and-learn, and he argued that we sometimes focus too much on the “language of skills” when talking with students about the need to avoid AI. He said that employers are likely to want students to have some skill in using AI, so he doesn’t downplay skill development. He does, however, talk with students about “engaging in processes that enhance [their] lives.” Instead of framing education in terms of future value, he tries to help them understand how writing (without AI) can be valuable to them here and now.

In an earlier CTE workshop, Jim said that not every course needs to move student writing into the classroom, but he argued that students should have opportunities to take courses that create AI-free environments like his. He has a vision for “focus” courses across the curriculum, courses that are designated as low or no tech. He also pitched the idea of “writing labs,” where students engage in in-person and perhaps communal writing in sessions attached to more traditional courses. 

Thanks to panelists James Wyckoff and Ethan King, as well as Jim Seitz, for sharing their experiences adapting their teaching to respond to AI technologies. For more advice on teaching with, about, or against AI, see “Generative AI in Teaching and Learning” on the UVA Teaching Hub.

Header image: “Brain Control” by Bart Fish & Power Tools of AI, betterimagesofai.org (CC BY)