Generative AI is now woven into many students’ academic lives — but how often are they using it, what do they think of course policies, and how do they decide when to follow (or break) the rules?
To find out, we surveyed UBC Arts students at the end of Fall 2024 (Winter Term 1) and Spring 2025 (Winter Term 2). We asked them about their experiences with course AI policies, their own patterns of AI use, and how different assignment designs might shape their choices.
Our Approach
- Who we surveyed: Students enrolled in courses offered by the Faculty of Arts. This included both Arts majors and students from other faculties taking Arts courses.
- How we selected students: Participants were randomly selected from all undergraduate Arts enrolments, and the chance of being selected increased with the number of Arts courses a student took (a rough sketch of this weighting appears after this list).
- Incentives: To encourage broad participation, we offered gift cards averaging about $7.50 CAD in value, including smaller guaranteed cards ($5 or $10) and chances to win one of eight $100 prizes. In total, we received 480 responses (300 in December 2024 and 180 in April 2025).
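For readers curious about the mechanics, here is a minimal sketch of this kind of course-weighted selection in Python. The roster size and course counts are invented for illustration; the real sampling frame came from enrolment records.

```python
# A sketch of course-weighted random sampling on a hypothetical roster.
import numpy as np

rng = np.random.default_rng(2024)
student_ids = np.arange(10_000)                            # hypothetical roster
arts_courses = rng.integers(1, 6, size=student_ids.size)   # Arts courses per student

# Selection probability proportional to the number of Arts courses taken.
probs = arts_courses / arts_courses.sum()
invited = rng.choice(student_ids, size=2_000, replace=False, p=probs)

print(f"mean courses among invited: {arts_courses[invited].mean():.2f} "
      f"vs roster mean: {arts_courses.mean():.2f}")
```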
This approach yielded a diverse and representative snapshot of how Arts students are engaging with Generative AI in their coursework.
In This Series
Over the next seven posts, we'll share what we learned:
1. How students learn about AI policies: While 78% of courses have a written AI policy, only about half of instructors explain the reasoning behind it, leaving many students confused about the rules.
2. What students think about those policies: A strong majority of students (75%) believe unrestricted AI use harms their learning, yet many also feel that total bans are a hindrance, showing a desire for a balanced approach.
3. How often students use AI — and for what: AI use is widespread, with only 19% of students never using it for coursework; it's most popular for summarizing readings, a task 32% of students use it for multiple times a week.
4. How often they break the rules: Rule-breaking is perceived as common, with students estimating that 55% of their peers violate policies, and our research indicates nearly a third (28%) admit to knowingly doing so themselves.
5. What rules seem reasonable to students: Even when instructors prohibit GenAI use, many students see some use as entirely acceptable; 72% said having AI identify weak arguments or unclear reasoning is "not a big deal" or "totally OK".
6. How students use AI in essay writing: Student approaches to AI in essay writing are polarized: while 20% avoid it completely, a significant group of nearly 15% lean on it for every stage of the writing process we surveyed.
7. What assignment design can (and can't) do: Assignment design can significantly curb AI misuse; the perceived use of AI detectors reduced student temptation by 32%, while explaining an assignment's purpose cut it by as much as 21%.
Together, these findings shed light on where AI is supporting student learning, where it’s short-circuiting it, and what instructors can do to respond.
How Students Learn About AI Policies
Nearly 80% of Arts courses now include an AI policy on the syllabus — but far fewer students say their instructors ever talk about it in class.
Students are navigating a patchwork of rules on how they can (and can’t) use Generative AI. Some courses post detailed policies, others just a line or two, and still others offer no guidance. This inconsistency leaves many students unsure where the boundaries actually are.
Our Results
Why It Matters
Students are understandably confused by the variety of policies across their many courses, and some are very concerned about unintentionally violating a course policy and ending up in trouble. Beyond confusion about the rules, some students don't understand why we choose the policies we do. AI is increasingly embedded in everyday online tools and software, and it has been part of most students' lives for four years; for many, it is more like Wikipedia or CliffsNotes than some sort of cheating machine. We need to explain why we think certain types of use will impede their learning and how that learning will benefit them in the short and longer term. Without a clear explanation of how AI use connects to course goals, policies can feel arbitrary.
Finally, talking about our policies also creates an environment of collaboration and openness. Without that openness, some students worry that asking questions about policies makes them look suspicious, like they're "trying to find loopholes."
The takeaway: posting a rule on a syllabus isn’t enough. Students are more likely to respect and follow policies when they’ve had the chance to hear, question, and understand them.
The Bottom Line
Most instructors are posting AI rules, but many stop there. Students are left with limited context, and the patchiness across courses only adds to the confusion. If the goal is to reduce misuse and support learning, the next step is simple: Define, Remind, Discuss. Define your policies carefully, remind students about them in multiple ways, and discuss the specifics and rationale with them.
What Students Think About AI Policies
Three out of four students believe unrestricted AI use makes their learning worse — but many also think total bans hold them back.
Our Results
- 75% of students said “no restrictions” on AI hurt their learning. Only 13% thought it helped.
- 60% felt allowing AI for brainstorming promoted learning; 57% said the same about light editing.
- 43% said total bans hinder learning, while 35% said they promote it.
- 45% said citation policies are helpful; 26% said they were harmful.
- 71% said unrestricted AI fails to ensure grades reflect real ability.
Why It Matters
These results highlight a tension: students often believe AI can support learning — but only in carefully scoped ways. They recognize that unlimited use undermines both learning and fairness in grading. They are much more divided, however, on blanket prohibitions on AI use. Many of them see such bans as obstacles to learning. Students and instructors would both benefit from sharing their perspectives and considering how these technologies help and harm learning.
The Bottom Line
Students are wary of both extremes. They don’t trust “anything goes” approaches, but they also doubt that blanket bans serve their learning. Limited permissions — especially for brainstorming and light editing — strike them as the most balanced path. For instructors, the lesson is clear: when setting AI rules, avoid one-size-fits-all policies. Instead, be explicit about what limited uses are acceptable, and connect those choices to both learning goals and grading fairness.
How Are Students Using Generative AI and For What?
Only one in five Arts students never use AI. Most fall somewhere in between, with “sometimes” being the most common answer.
AI use is now routine for many students, but not in a single, uniform way. Some rarely touch it, others use it daily, and many fall in the middle. And when they do use AI, it’s most often for course-related tasks — though how they use it varies widely.
Our Results
- Frequency overall: 19% never used AI for coursework, while 10% said they used it daily. The rest spread across “rarely,” “sometimes,” and “often,” with sometimes as the most common choice.
- Compared to other uses: Students reported using AI more for coursework than for entertainment, general questions, friendship/companionship, or paid work.
- Tasks most often supported by AI: Clarifying course material, summarizing readings, and writing short assignments like discussion posts.
- Summarizing readings: 32% of students used AI to summarize readings daily or several times a week; only 28% said they never used it for this.
- Short-answer writing: 28% said they used AI to draft short written responses at least a few times a month; 45% said they never did.
- Study aids: 41% never used AI to generate practice tools like flashcards or quizzes — despite these being among the most effective learning uses.
Why It Matters
There is no single "typical" student when it comes to AI use. Importantly, students are turning to AI for many of the learning-related tasks they undertake during a course: clarifying concepts, summarizing readings, and even completing small assignments. That raises questions about how much of this work students are doing themselves — and whether AI is being used as a shortcut or a support.
Meanwhile, relatively few are using AI to create practice materials, even though this might be one of the technology’s most beneficial applications. The gap suggests uncertainty: students may not know whether study aids are allowed under broad course prohibitions, or they may simply not recognize the learning value.
The Bottom Line
AI use is widespread but uneven. Students are most likely to use it to lighten the heavy lifting of reading and writing, while underusing it as a study tool. The challenge is to recognize that what students see as “helpful” may not always align with what supports genuine learning.
How Often Students Break the Rules
By April 2025, students thought more than half their classmates were breaking course AI rules. Nearly one in three admitted they had done so themselves.
A Note on the ‘List Experiment’
To get more honest answers, we used a survey technique called a "list experiment", which reduces social desirability bias when asking about sensitive behaviours. Using this method, we estimate that 28% of students knowingly used AI in ways that violated course policies.
Average estimated percentage of students in a typical class who use Generative AI in ways that violate course policies. The average estimate was 41% in December 2024 and 55% in April 2025.
The list experiment (https://methods.sagepub.com/ency/edvol/encyclopedia-of-survey-research-methods/chpt/listexperiment-technique#_=_) is a survey technique designed to reduce social desirability bias when asking about sensitive behaviours. All respondents are shown a list of statements and asked to report only the number of statements that are true of themselves, not which ones. For instance, two of the items in our list were "I have a big problem with procrastination" and "I'm very concerned that AI will make it harder for me to find a job I want". Respondents are randomly assigned to one of two lists: the treatment group sees the same four items as the control group, plus the key statement of interest, in our case "I knowingly used GenAI in a way that violated course policies this term". By comparing the average number of items selected across the two groups, researchers can estimate the prevalence of the sensitive behaviour without requiring any respondent to directly admit to it.
In the control group, the average number of items selected was 2.50 of the 4 listed statements. In the treatment group, which saw 5 statements, the average was 0.28 higher, at 2.78. Accordingly, we estimate that 28% of our respondents agreed with the statement "I knowingly used GenAI in a way that violated course policies this term".
| Group | Avg. Items Selected | s.e. | p-value | 
|---|---|---|---|
| Control Group (4 items) | 2.50 | 0.60 | |
| Treatment Group (5 items) | 2.78 | 0.58 | |
| Difference | +0.28 | 0.08 | 0.0007 |
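For readers who want to see the arithmetic, below is a minimal sketch of the difference-in-means estimator in Python. The response counts and group sizes are simulated to roughly match the table above; they are not the actual survey data.

```python
# A sketch of the list-experiment difference-in-means estimator.
# Simulated responses: the control group sees 4 items; the treatment group
# sees those 4 plus the sensitive item, endorsed here with probability 0.28.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
control = rng.binomial(4, 0.625, size=240)
treatment = rng.binomial(4, 0.625, size=240) + rng.binomial(1, 0.28, size=240)

# The gap in mean item counts estimates the share endorsing the sensitive item.
diff = treatment.mean() - control.mean()

# Standard error of a difference between two independent means.
se = np.sqrt(treatment.var(ddof=1) / treatment.size
             + control.var(ddof=1) / control.size)

p = 2 * stats.norm.sf(abs(diff / se))  # two-sided p-value, normal approximation
print(f"estimated prevalence: {diff:.0%} (se {se:.2f}, p = {p:.4f})")
```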
Why It Matters
Students are not just worried about isolated incidents. They increasingly believe that breaking AI rules is common — even normal. And when nearly a third admit to doing it themselves, it’s clear that misuse is not limited to a small group.
These findings also highlight a trust issue: when students assume their peers are breaking rules, they may feel pressure to level the playing field by doing the same.
Moreover, social norms shape how people respond to rules. If students believe “everyone is doing it,” policies lose their power — even clear, well-communicated ones. Effective responses need to account not only for individual choices but also for the culture of what students think is normal.
The Bottom Line
Misuse of AI isn’t rare or exceptional. It’s frequent enough that instructors can’t assume policies alone will prevent it.
If we want to reduce violations, we’ll need more than rules — we’ll need approaches that address both student behavior and student perceptions of fairness.
How Do Students Draw the Line Between Acceptable and Unacceptable GenAI Use?
Students don’t see all GenAI violations as equal. Minor editing often feels harmless — even if technically against the rules — while full outsourcing is judged as clear misconduct.
Many students treat "light" AI use like driving a few kilometres over the speed limit: technically wrong, but not a real problem. Our survey shows that while small-scale editing or formatting feels acceptable to most, outsourcing full essays is almost universally condemned.
Our Results
- Grammar and polishing: 90% said using AI to polish grammar was “not a big deal” or “totally OK.”
- Formatting bibliographies: 76% felt the same way about AI formatting citations — though slightly more saw this as problematic compared to grammar.
- Substantial writing help: Majorities said AI feedback on weak arguments (72%) or rewriting unclear passages (67%) was "totally OK" or only a minor issue.
- Complete outsourcing: Clearly over the line — 83% said “totally not OK,” and another 12% called it “wrong and a big deal.”
- Middle ground: Using AI for short discussion posts was judged less serious than outsourcing full papers, but still a notable violation.
Why It Matters
These findings highlight a mismatch: instructors may see any AI use as a serious violation, while students view some forms as harmless. Without explicit dialogue, instructor expectations and student norms will collide — leaving both sides frustrated.
The Bottom Line
Students see some GenAI use as normatively acceptable even when rules forbid it. Policies need to spell out which uses cross the line — and why — if they are to be respected and effective.
How Students Use GenAI When Writing Outside Class
When it comes to essays, many students don’t touch AI — but nearly 15% use it for every task we asked about.
Students' use of AI in essay writing is highly varied: a substantial group avoids it altogether, while another group leans on it at nearly every stage. Most fall somewhere in between, using AI selectively for lighter tasks.
Our Results
- Never users: 20% of students reported never using AI for any of the essay-related tasks we asked about.
- Heavy users: Nearly 15% said they used AI for every task we asked about while writing at least some of their papers.
- Selective users: Most students fell between these extremes, with varied levels of use.
- Tasks where AI use is highest: Brainstorming, light editing, checking grammar/spelling, and reviewing organization/flow.
 
Patterns of students' reported essay-related AI use. The chart on the left shows how often, on average, students said they used AI for different writing-related tasks. The chart on the right shows the share of the listed tasks for which each student reported never using AI; a score of 0 means they used AI for every task we listed.
Why It Matters
These results underline that there is no single student approach to AI in essay writing. Some students avoid it completely, while others embrace it as a near-constant companion. Most use AI in targeted ways — often for the lighter tasks that feel like support rather than substitution.
The Bottom Line
AI is part of essay writing for many students, but in uneven ways. While only a few rely on it to generate entire papers, a meaningful share integrate it into multiple stages of the writing process.
What Assignment Design Can (and Can’t) Do
Results from our experiment suggest one effective way to discourage unsanctioned AI use may lie in assignments that have a clear and compelling rationale.
Instructors have tried many tweaks to make essays “AI-proof”: adding reflections, scaffolding steps, or requiring specific sources. Our experiment tested how students themselves say these features affect their temptation to break the rules. The results show that some approaches work better than others — and some hardly move the needle at all.
What we did:
In this experiment, students were shown pairs of different essay assignments and asked to pick the one they would be more likely to use Generative AI to help complete. This design, called a conjoint experiment, allows researchers to estimate the impact that different characteristics of a choice have on people's preferences.
Imagine, for example, showing citizens summaries of two political candidates with different combinations of policy positions, experience levels, and personal backgrounds, then asking which candidate they would be more likely to vote for. By systematically varying these attributes across many choice scenarios, we can isolate how much each individual characteristic (like economic policy stance or age) influences vote choice, even when multiple factors are at play simultaneously. We took a similar approach, but varied the attributes of different take-home written assignments.
An example question students answered during this part of the survey.
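To make the analysis concrete, the sketch below shows one standard way to estimate these attribute effects: regress the binary choice on dummy variables for each attribute level, relative to a baseline level. The attribute names, levels, and simulated responses are hypothetical stand-ins, not our actual design matrix or data.

```python
# An illustrative sketch of conjoint-style estimation with OLS.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
n = 4_000  # hypothetical assignment profiles shown across all choice tasks
df = pd.DataFrame({
    "chose_ai": rng.integers(0, 2, size=n),  # 1 = picked as more AI-tempting
    "weight": rng.choice(["5%", "20%", "40%"], size=n),
    "purpose": rng.choice(["none", "future_jobs", "exam_prep", "real_world"], size=n),
})

# Regressing the binary choice on attribute dummies gives each level's average
# effect on the probability of being chosen, relative to the omitted baseline
# (here: an assignment worth 20% of the grade, with no stated purpose).
model = smf.ols(
    "chose_ai ~ C(weight, Treatment('20%')) + C(purpose, Treatment('none'))",
    data=df,
).fit()
print(model.params.round(3))
```

In practice, standard errors would also be clustered by respondent, since each student completes several paired choices.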
Our Results
The results from this experiment range from –1 to 1 and can be read as the impact of a particular attribute (e.g., an assignment worth 5% of the grade) relative to a baseline (in this case, an assignment worth 20%).
- Assignment features: Compared to a "research essay" with no additional features:
  - Adding required course material reduced AI temptation only slightly (–0.04).
  - Scaffolding (proposal, outline, final draft) had a modest effect (–0.07).
  - A short reflective writing piece had the strongest effect in this group: a 13% decrease in temptation.
- Grade weight: Compared to an assignment worth 20%, AI temptation increased by 12% for a low-stakes 5% assignment and decreased by 11% for a 40% paper. Higher-stakes assignments lowered AI's appeal.
- Stated purpose: Compared to an assignment with no stated rationale, assignments with a clear rationale reduced AI temptation: "skills for future jobs" (–21%), "preparing for exams" (–21%), and "real-world connections" (–15%).
- Detection approaches: Compared to an instructor who states "I trust you to follow these rules":
  - "I can tell when students use AI — I reported 5 this term" decreased AI temptation by 21%.
  - "I will randomly select 10% of students for meetings about their paper" reduced temptation by 16%.
  - "I will use AI detectors and meet with any students who are flagged" decreased temptation by 32%.
Why It Matters
Not all assignment tweaks are equally effective. Small design changes, like scaffolding or required sources, don’t shift student behavior much. But adding friction (like reflection writing), raising the stakes, and showing students why the work matters all help reduce temptation.
Perhaps most important, oversight matters. Students respond to credible monitoring. When they know instructors will follow up, the rules can carry real weight.
The Bottom Line
We can’t AI-proof take-home assessments, but some assignment features and course policies can discourage over-reliance on Generative AI. The most effective strategies combine:
- Higher stakes (so the work feels worth doing authentically)
- Clear purpose (so students understand why it matters)
- Credible oversight (so students believe the rules are enforced)
Together, these factors reduce the appeal of cutting corners with AI far more than changes to required elements of the assignments alone.