This guide offers some practical strategies for adapting your assessments in response to widespread student AI use. Rather than prescribing a single approach, we present options you can choose from based on your teaching context and comfort level.
What you will find on this page:
- Three assessment approaches: 1) AI-proof & in-person, 2) AI open or encouraged, and 3) AI-“resistant”.
- Specific examples you can adapt for your courses.
- Current data on how Arts students actually use AI.
- Resources for support and further learning.
What you won’t find on this page:
- Tech-heavy solutions requiring new software skills.
- Pretense that this is easy or that we have all the answers.
- Fully-developed and tested assignment materials.
This guide is a living document and a sincere effort to provide a starting point for instructors confronting the challenge of Generative AI for teaching, learning, and assessment in Arts. It distills academic research, teaching and learning guides, instructor articles in university and teaching-and-learning publications, our own survey research on UBC Arts students’ perspectives, anecdotal experience, and more.
The reality of our current teaching moment
AI means we must dramatically rethink how we assess learning… but we can still teach and assess the ideas, methods, and forms of expression that define our disciplines—whether that’s interpreting texts, analyzing individuals and society, performing, or creating new work.
AI tools are much more effective than when they were first released.
- Chatbot tools now offer less formulaic responses and a more natural writing style.
- There are specialized tools that let students upload documents and chat directly “with” them, so responses are much more specific and accurate. Other tools modify AI-created text to read more as if a human author wrote it (adding spelling errors, removing words AI favours like “delve”).
- There are even tools one can use to create ‘evidence of process,’ such as drafts, notes, and document versions, that reflect the start–stop–delete–edit nature of actual writing.
- There are browser plugins that can now answer online quiz questions for students. Remote test writers can also use a second device, like their phone, to get around tools like LockDown Browser.
AI tools are increasingly embedded in daily life.
- Google searches now provide an AI-generated response.
- Google Docs and tools like Grammarly offer writing assistance while students work on documents.
- A variety of tools are effective at locating research, summarizing scholarly material, and producing maps of citation networks.
- Many publishers now offer AI summaries and AI chatbots that encourage readers to ‘chat’ with journal articles.
- In this environment, blanket AI prohibitions put the burden on students to determine whether the tools they use employ AI.
AI use among students is widespread.
We ran surveys of students in UBC Arts classes in the winter session of 2024-25. These results have important implications for our course design.
- Only 19% of students report never using AI for coursework.
- We asked, “In a typical class, what percentage of students do you think use AI in ways that violate course policies?” The average response was 41% in December 2024 and 55% in April 2025.
- We also included a list experiment – an approach to measuring beliefs and behaviours that many survey respondents do not want to share with others – and estimated that 30% of Arts students have knowingly used AI in ways that violate course policies (see the note after this list for how such estimates are typically computed).
- But many course policies don’t seem realistic or reasonable to students: Students see some kinds of GenAI use a lot like many of us think about driving 110 km/hr when the speed limit is 100. A full 90% of students perceive using AI to “fix spelling and grammar” – even when not permitted – to be either ‘Not a big deal’ (17%) or ‘Totally OK’ (73%). Similarly, 76% feel the same about using AI to format a bibliography, 72% about having it identify weak arguments or reasoning, and 67% about having it suggest alternative phrasing for unclear passages.
- And students use GenAI in many different ways, including: brainstorming; outlining, composing, and editing written work; generating test preparation materials; recording and summarizing lectures; summarizing assigned reading material; finding scholarly research; and completing research tasks like synthesis and developing literature citation maps.
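A brief note on how the list-experiment figure above is typically computed (a general sketch of the standard single-list design, not a description of our exact survey instrument): respondents are randomly assigned either a control list of non-sensitive items or a treatment list that adds the sensitive item (here, knowingly using AI in ways that violate course policies), and each respondent reports only how many items apply to them. The estimated prevalence of the sensitive behaviour is then the difference in mean counts between the two groups:

\hat{\pi} = \bar{Y}_{\mathrm{treatment}} - \bar{Y}_{\mathrm{control}}

where \bar{Y} is the average number of list items that respondents in each group say apply to them.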
(Re)designing assessments is time-consuming work.
We’re all feeling the weight of another major shift in our teaching landscape, similar to the challenges we collectively faced during COVID-19, and doing so largely on our own time and energy.
There’s no magic bullet for addressing AI in every course, and honestly, what we implement today may need revision as technology continues evolving. Still, our students deserve thoughtful responses to these changes. Starting small, experimenting, and having open and honest conversations with students is a reasonable way to begin.
Plus, in Arts, we’re uniquely positioned to help students understand the destabilizing impact of new technologies, the ethical considerations, and the power relations and consequences in play, and we can provide them with the tools to find their own answers to new questions: When AI systems can increasingly create ‘output’ that previously only human beings could, what capacities will always be uniquely human? How can we develop and foster these capacities?
Some Principles of Assessment to Consider
Uncertain Evidence
Any assessment that lacks active, in-person supervision offers uncertain evidence about student learning.
Based on output alone, we cannot reliably identify if/how a student used AI. The University of Sydney suggests a two-lane approach: combining ‘secure’ assessments (Lane 1) with ‘open’ assessments (Lane 2). The basic idea is that for all “open” assessments, AI use cannot be practically prevented. Their whole AI for educators site is quite good.
AI is a Tool
Generative AI is not inherently good or bad for teaching and learning.
There are many compelling ways to use AI to enhance learning. The key questions are whether, when, and how to use it.
Grades Reflect Achievement
Course grades should reflect students’ demonstrated achievement of learning objectives.
We are collectively certifying that Arts graduates possess the knowledge and skills our degrees represent—a certification that programs, employers, and society rely upon.
Student Perspective
Student perspectives, experiences, and constraints should inform our assessment design.
Students are busy, the work is challenging, and a tool exists to make it easier. We must consider their reality when designing assessments.
Motivation is Key
Learning can be difficult, and tackling difficult things requires motivation.
We can discourage over-reliance on AI by providing motivation through relevance, connections to future work, and opportunities for students to share their work with a broader audience.
Create ‘Friction’
Assessment features can create ‘friction’ that discourages over-reliance on GenAI.
Friction includes any assignment element (like scaffolding or reflection) that makes it harder for students to simply use an AI-generated response without deeper engagement.
Approaches to Assessment You Might Consider
Initial Considerations
Try out the “Define-Remind-Discuss” approach to Gen AI use policies
- Define your Gen AI use policies specifically and clearly, and be as precise as possible about what use is permitted. For instance, “AI is permitted for brainstorming, but you must write your own paper” leaves unclear whether brainstorming refers to topic ideas, a paper outline, specific arguments, or even “brainstorming” a detailed outline of a subsection. Consider permitting AI use that is relatively light touch, such as using AI-powered literature search engines or tools that provide spelling/grammar suggestions. In addition, blanket prohibitions like “the use of any AI tool is prohibited in this course” may prevent cautious students from using AI to create study materials or to pursue their interest in course materials further.
- Remind students about your policies. Do it early, often, in-class, on your syllabus, and, ideally, on each assessment. Keep in mind that students have four or five classes, each with many learning activities and assessments, and each course or assessment will have its own AI-use policies.
- Discuss your policy with students. Explain your rationale and why you think specific types of AI use impede or promote their learning. Consider explaining how the skills they develop will help them in the future. An in-class discussion can also encourage dialogue that helps students understand your reasoning. At the same time, you learn about their concerns, fostering critical reflection on ethical AI use and signalling that questions are welcome, which reduces anxiety about inadvertent policy violations.
Talk with students about Gen AI, learning, and course policies
- Consider starting with the possibility that students are just as concerned about the role of AI in society, their future, and their UBC education as many instructors are.
- Check out our We’re Only Human “Talking with students about Gen AI and Academic Integrity” activity.
Other Considerations
- Consider using asynchronous delivery of ‘lecture’ material to free up in-class time for additional in-person assessment.
- Consider increasing student choice in assessment to increase the chance they find take-home assessments that speak to their own interests, passions, and goals.
“AI-Proof” In-person Assessments
In-class tests and exams have significant pedagogical limitations:
Problems with high-stakes, time-restricted, in-person tests include:
- Often assess speed – rather than depth – of thought;
- Reflect capacities we aren’t directly interested in assessing, including test anxiety/stress management and handwriting skill, speed, and endurance;
- Privilege students with stronger English skills, who may be able to process and produce language faster than English Language Learners;
- Can be inequitable for students with disabilities and neurodivergence;
- Are very limited in assessing students’ collaborative and iterative learning capacities;
- Capture performance at a single, potentially unrepresentative moment, making them highly vulnerable to temporary setbacks like illness, commuting problems, or even a bad night’s sleep.
Approaches to improving in-class assessments:
Consider in-person assessments that test higher-order thinking skills. For example,
- 40 minutes to read and synthesize two shorter passages of text
- 50 minutes to write a response to a recent news article that integrates course concepts
- Take-home/return hybrid – Students receive questions, prepare responses offline for 48 hours, then write from memory in a supervised setting.
- Two-sitting analysis – Students are presented with something to analyze using course concepts, write about it for 50 or 80 minutes, and hand it in; the next class session it is returned and they either rewrite or simply supplement what they wrote the first time. The total package is graded.
- Maybe a short assessment where students edit a paragraph of writing; outline an essay based on a course topic and a specific prompt; assess the quality of empirical evidence offered in a Substack article; analyze a short film; identify weaknesses and propose specific improvements in a flawed argument or research design; or briefly outline the strongest counter-argument to a given claim and respond to it. These tasks test depth of understanding, and they are only a few possibilities; you will know best what works in your course.
These examples ask students to engage in much of the thinking and activities assessed in longer take-home assignments. In this case, however, they do so in smaller chunks and a more secure environment. We can’t fully replace the learning experience that longer take-home work accomplished, but we can verify that students can do the intellectual work themselves.
Modify test features to address common weaknesses
- To ensure you aren’t assessing ‘speed’, design tests that a quick-thinking and well-prepared student might complete in half the available time.
- Increase student choice over test questions. This decreases the likelihood that a single area of weakness will significantly impact the grade.
- Clearly indicate the amount of time students might spend on each question based on the test duration and the distribution of total points. There’s no need to have them do this on their own.
- Permit students to write in-person tests on their own computers using software, like LockDown Browser, that restricts their access to AI tools. Keep in mind that students with newer laptops can easily run LLM chatbots on their machines without requiring internet access. For UBC Arts courses, Arts ISIT can provide you with tips and assistance using LockDown Browser. Note that some students use tablets or Chromebooks as their primary devices, and some of these are not currently compatible with LockDown Browser (Fall 2025). Students can borrow devices at the library (https://services.library.ubc.ca/computers-technology/technology-borrowing).
“AI-encouraged” Assessments
Resources for assessments where Gen AI use is permitted, encouraged, or required
These assessments involve guided student use of suitable technologies where appropriate: AI use is fully permitted, and students receive support in using Gen AI and other tools. In short, students can use any AI model, and assignments are designed to promote the efficient, ethical, and educational use of these tools.
- UBC has useful resources at https://ai.ctlt.ubc.ca/resources as well as many helpful seminars and events (https://ai.ctlt.ubc.ca/events/).
- See also the University of Sydney’s list of ‘open’ assessment ideas.
- And check out our We’re Only Human “Developing a Term Paper Topic and Research Question with AI” activity.
“AI-resistant” Assessments
Many instructors may still wish to assign work where Gen AI use is limited to some extent, but where students are not directly observed while completing the work. While these assessments are not secure, there are steps we can take to decrease the likelihood that students will outsource most or all of the work to Gen AI (and thus lose out on the learning experience).
Please note: these assessments are not AI-proof. Students can still use Gen AI to complete these assignments, and AI use will be very hard to detect in most cases. These assessments may, however, reduce the degree to which students outsource thinking to AI (at both the aggregate and individual level).
There’s an implicit distinction here in need of elaboration. In facilitating learning, AI gets us to quickly get to the important work (examples might include providing suggestions for how to start researching a topic or possible ways to phrase something). In replacing learning, AI does the important work for us (such as answering exam questions). To these I would add a third category: supplementing learning, the murky middle where AI is used alongside or incorporated into one’s own work (such as providing supporting data or creating an essay outline). Naming this usually unrecognized middle ground is important, because whether AI is helping or harming in these cases will often depend on the context & goals. — Derek O’Connell, July 2025.
- “Pair” take-home work with in-person assessment
Combine take-home assignments with follow-up in-person evaluations. Students who complete work with minimal AI assistance will be better prepared for the in-person component. Math provides a useful analogy: students complete homework (“problem sets”) in an unsupervised setting. They face similar questions during invigilated in-person assessment. The link between homework and tests is clear: if you do the homework on your own, you will be much better prepared to write the exam. Some examples:
- While grading take-home work, create one question for each student to answer in class or on the exam, or
- ask all students process reflection questions like “Which concept was most challenging and how did you work through it?”, or
- “Which source cited in your paper stayed with you most and why?”, or
- have students present their work to peers with Q&A sessions, or
- in smaller courses, have students meet with you for a ‘mini-defence’ of, or ‘collegial conversation’ about, their papers.
- Scaffolding: Building Learning Through Structured Steps
Breaking assignments into smaller, interconnected components creates natural friction against AI misuse while also turning a large, potentially intimidating task into more manageable chunks. Consider, for example, requiring progressive drafts with evolving requirements: 1) a topic or research question, 2) an annotated bibliography and draft thesis, 3) an outline with topic sentences, 4) a draft, and then a final submission.
Not all of these components need to be assessed. Peer review can be helpful for some stages. Consider having students complete some of these tasks in-class without AI access.
- Assessing Process, Not Just Product
An idea closely related to ‘scaffolding’ is requiring students to submit process documentation, such as notes, outlines, drafts, and version histories. This documentation is easy for students who do not use AI to provide. Creating convincing fake process documentation with AI requires multiple prompts, coordination, and time; at some point, it becomes easier just to do the actual work than to fabricate an elaborate paper trail. Assessment of process documentation also shifts focus from final product to learning journey, helping students understand that the assignment’s value lies in intellectual growth, not just submission. Doing so, however, may require considerable additional investment in assessment.
- Moving Beyond an “Audience of One”
For students, knowing their work will be shared beyond just the instructor or TA can substantially increase motivation and accountability. Consider: class presentations with Q&A, digital portfolios or blogs, community engagement projects, end-of-term symposiums, or peer teaching assignments where students create materials for next year’s cohort. Public scrutiny and the opportunity for meaningful impact motivate authentic engagement far more effectively than traditional “submit and forget” assignments.
- Meaningful Choice and Personal Investment
When students pursue topics that genuinely interest them or connect to their experiences or goals, they’re more likely to be motivated and do the work themselves. Consider offering a menu of options that address the same learning objectives. For instance, public education campaigns, comparative case studies, policy briefs, research essays, debate preparation materials, critical review dossiers, knowledge translation projects, podcasts, and blog posts. Such a choice enables connections with their major, career interests, or personal passions. When students feel ownership over their topic and approach, work becomes personally meaningful. This intrinsic motivation is perhaps the strongest deterrent to AI misuse.
UBC has developed a tool you can use in Canvas that lets students identify their choices and then automatically calculates the final grade: https://lc.landfood.ubc.ca/flex
- Collaborative Learning
Group work fosters shared responsibility for academic integrity while building peer accountability. Effective strategies include: structured group projects with defined individual contributions, collaborative annotation, think-pair-share writing exercises, and jigsaw assignments where each student teaches others their expertise. Consider developing, with students, a process to follow when a group member uses Gen AI – or engages in other conduct – that violates course or assignment policies.
Arts ISIT has a helpful Collaborative Learning Guide.
- Project-Based Learning
Consider semester-long projects where students investigate real problems or questions relevant to your discipline: analyzing local policy impacts, creating podcasts exploring social justice themes, publishing a literary magazine featuring original creative writing alongside critical essays about contemporary social issues, combining research skills with digital storytelling, documenting community histories, curating exhibitions exploring cultural identity in their community through oral history interviews, evaluating institutional practices, or co-creating artistic performances. Ideally, students can present their work to audiences, stakeholders, and communities beyond the classroom. The extended timeline with multiple check-ins makes it difficult to outsource the work at the last minute. At the same time, presentations to real audiences create accountability that motivates students to understand their material deeply. When students realize their work can influence decisions or contribute to community knowledge, they may be more invested in doing the thinking themselves.
- Consider tweaking outright AI prohibitions
Blanket bans on Gen AI use in a course have a few limitations: they seem unrealistic and outdated to many students; they create uncertainty and anxiety for students who use AI to support their learning but fear misconduct allegations; and they place the burden on students to identify when Gen AI is involved in standard tasks like checking spelling and grammar, formatting citations, or locating research sources. More specific and detailed policies can help students understand that the AI restrictions you choose are designed to promote learning, and can help them see the tradeoff between efficiency gains and learning losses.
Check out the Define-Remind-Discuss approach to course policies.