The goals of this project include:
- Establish a sustainable network of faculty and staff to respond collaboratively to the evolving challenges of generative AI across the Faculty of Arts.
- Develop and evaluate a wide range of adaptable learning materials related to generative AI and writing that build on scholarship related to academic integrity, writing, and other relevant fields.
- Distribute “pulse checks” (up-to-date assessments of generative AI, writing, and academic integrity) to the Faculty of Arts. These will include reports on technological change, perspectives/concerns from Arts students and faculty, and a list of available resources.
News and Updates
01/22/2025: AI Resources Handout
AI & Academic Integrity Resources Handout: This guide provides tools for educators on AI policy, assessment design, and integrating generative AI into teaching. Explore strategies, examples, and insights to navigate AI in education. Click here to download!
12/19/2024: Only Human GenAI & Writing Courses – T2 Tips & Materials
Dear Colleagues,
Earlier this year, you let us know you were interested in learning more about our project, We’re Only Human? Educative Frameworks for Artificial Intelligence, Academic Integrity, and Writing in the Faculty of Arts. We are writing now to share an update, including some early insights and road-tested learning materials that we hope will be helpful to you in the new year.
Over the fall semester we’ve worked with both instructors and students in writing courses to learn more about how we can continue to teach, learn, work, and study with integrity in the era of GenAI. Reassuringly, we are seeing that many of the practices and principles of an educative approach to fostering academic integrity continue to work. Though those lessons may be familiar, we think they are worth repeating to remind us that we can keep building on our existing expertise!
Our instructors also continued to encounter cases of misconduct, in which students used GenAI in ways that did not align with expectations for appropriate use. These cases remind us that, unfortunately, we won’t ever eradicate misconduct. But in illuminating and thoughtful discussions with other students in our courses, we’ve also been reminded that we have the agency and opportunity to create the conditions for more students to choose to do their work with integrity, and to help them do so.
We share below some key lessons so far, along with four learning activities you can adopt or adapt in your own courses.
Take-Aways – In our courses, we can:
- Explain and discuss our generative AI use policies early, often, and with rationales. We can support our students by providing detailed guidelines and revisiting these policies frequently. Many students lack a clear understanding of what constitutes academic integrity and may feel anxious about appropriate GenAI use; students reported being afraid even to ask questions about it in case their professor assumed they were “cheating.” They also encounter multiple, varying policies across different classes, which causes additional confusion and anxiety, and may foster a sense that the rules are arbitrary if we aren’t explicit about our expectations. If you are not allowing GenAI use, we also strongly recommend that you explain why certain GenAI uses inhibit the learning your course requires students to develop and demonstrate. When students understand the educational rationale behind these rules, they are more likely to follow them, and the chance to ask questions without fear of repercussion builds trust.
- Be strategic about how we talk about GenAI tools in our policies. For example, “ChatGPT” and “GenAI” are not interchangeable terms, and policies referring only to ChatGPT may unintentionally permit other tools (like Google’s Gemini or Microsoft’s Copilot). The GenAI landscape is expanding rapidly. Instead of naming particular tools, be explicit about which GenAI tasks are acceptable and which are not. Pay particular attention to where “editing/polishing” fits in: many students use AI-driven tools like Grammarly and see them as legitimate aids, akin to spell-check or built-in grammar checkers.
- Acknowledge that truly “AI-proof” (take-home) assignments do not exist. Students with sufficient skills can use AI to complete all or most of nearly any task. Their proficiency with GenAI varies widely, and research suggests instructors are not particularly skilled at distinguishing AI-generated writing from student writing. Skilled users collaborate with AI tools to produce human-like output, making detection even harder. GenAI tools are also increasingly embedded in basic software. While this realization may feel a little defeating, we hope that accepting this reality helps us shift our energies – and we continue to examine how assessment in our courses needs to shift, too.
- Require reflective use disclosure statements on assignments to encourage thoughtful application of GenAI tools and prompt reflection on when and how their use is productive. Our experience showed that teaching students proper citation practices for GenAI makes them (and their instructors!) more aware of how AI shapes their written production. (The UBC Library GenAI citation guide was very helpful: link.) Several instructors on our team trialed a disclosure statement or acknowledgment with commentary and noted that this requirement not only follows the scholarly integrity practices of many research publications, but also ensured that students reviewed the course policies and expectations for permissible GenAI use on the assignment once more. Be sure to explain the potential benefits and consequences of specific types of use.
- Recognize the value and effectiveness of direct engagement with our students. Our experiences this semester reinforced the value of creating and sustaining conversation with and between students about the work they are doing. Interactions such as one-on-one consultations in office hours, in-class collaborative reflection, and peer review sessions throughout the semester provided opportunities for check-ins, just-in-time support, and reflection that built students’ confidence and deepened their investment in the learning process as part of a community. This relational work also meant that we could identify misuses of GenAI in assignment submissions.
We’d love to hear from you about your own takeaways from this semester. If you’re willing to share your insights, please contact our team.
Learning Materials
Feel free to tailor these materials as you see fit. If you choose to use any of them, we’d be grateful if you’d let us know by contacting our team. We would appreciate the opportunity to help evaluate their impact in your class.
Our evaluation process requires minimal effort from you. We would simply ask that you circulate two emails to your students and allow us a brief moment at the start of class to invite them into a focus group.
Please note that our evaluation focuses solely on the materials, not on you. We’ll provide you with a detailed summary of the results, but we won’t share anything that could identify you, your class, or your students.
You can find all of these resources at: https://woh.arts.ubc.ca/learning_resources/
Developing a Term Paper Topic and Research Question with an AI Assistant
Developed by Micheal Jerowsky
This resource is an educational activity designed to guide students in utilizing AI tools to select term paper topics and develop research questions. It fosters critical engagement with AI, enhances research skills, and promotes digital literacy through structured interaction and evaluation of AI-generated suggestions.
Educational Resource: Indigenous Knowledge & AI, Parts 1 & 2: Data Sovereignty
Developed by Laila Ferreira
This resource introduces students to Indigenous knowledge and its representation by AI, focusing on the values, practices, and protocols inherent in Indigenous ways of knowing. Through readings, discussions, and AI-based activities, students critically evaluate how large language models define and convey Indigenous knowledge, highlighting gaps, ethical concerns, and the need for cultural context. The activity promotes ethical AI use, critical thinking, and engagement with Indigenous perspectives in a digital age. Please note that this activity has two parts and, in its current form, requires 2–3 hours of class time.
Finding and Evaluating Scholarly Sources with AI Tools
Developed by Rebecca Carruthers Den Hoed
This resource is an activity designed to teach students how to find and evaluate scholarly sources using AI tools, UBC Library’s Summon tool, and Google Scholar. Students compare the effectiveness of these tools in locating peer-reviewed sources, reflect on their strengths and limitations, and learn strategies for integrating these tools into their research practices. The activity aims to enhance students’ digital literacy and critical thinking skills while emphasizing informed tool selection.
The Two AIs: Talking with Students about Generative Artificial Intelligence and Academic Integrity
Developed by Moberley Luger; based on an activity developed by Laurie McNeill
This resource provides activities for discussing generative AI and academic integrity with students, focusing on fostering a shared understanding of integrity rather than punitive measures. It encourages group discussions, critical reflections, and collaborative development of definitions and perspectives on AI’s role in academic work. The activity emphasizes aspirational engagement with academic integrity and addresses the complexities of using generative AI tools responsibly.