Curious about how AI is reshaping writing and academic integrity? So are we. This is where we collect and share thought-provoking articles, blog posts, and academic research that explore the complex relationships among these topics. This evolving collection helps inform our work, and we hope it sparks ideas in yours too.
A.I. School Is in Session: Two Takes on the Future of Education
The episode explores how AI is reshaping education from K–12 to universities: Alpha Schools use AI-driven personalized lessons (about two hours daily) and “guides” to focus the rest of the day on life skills, while Princeton’s D. Graham Burnett argues AI could push higher ed—especially the humanities—toward deeper, purpose-driven inquiry or even disrupt the traditional university model. Student voices show AI’s everyday role: boosting comprehension, automating study workflows, and aiding translation, but also enabling corner-cutting, causing false plagiarism accusations, and raising equity concerns. Overall, AI is now “in the water” at school—powerful for motivated learners, problematic when misused, and forcing institutions to rethink assignments and value.
Writing Responsibly: Considerations for Academic Writing in the Time of Generative AI
Dr. Laurie McNeill emphasizes that writing is fundamentally a socially situated act—it responds to specific audiences and purposes within particular cultural and temporal contexts, and it plays an active role in shaping human relationships. She also stresses the importance of understanding what “doing one’s own work” means, especially when using generative AI tools like ChatGPT: writers must be aware of how these tools operate and carefully consider how their use aligns with the expectations of the academic community and their own ethical responsibilities.
Will AI Usher In the End of Deep Thinking?
Derek Thompson speaks with Georgetown professor and author Cal Newport about how the rise of AI raises urgent questions about our intellectual habits and what it means to think deeply. Newport observes that students aren’t merely outsourcing essays to AI for efficiency; rather, they rely on conversational prompting designed to reduce the mental strain of writing, potentially eroding their capacity for deep, reflective thinking.
In the AI era, how do we battle cognitive laziness in students?
Students risk falling into cognitive laziness—delegating critical thinking and complex reasoning to AI tools—thereby compromising their own engagement with foundational concepts. This article argues for embedding metacognitive scaffolding into AI literacy curricula: strategies like reflective tasks, planning prompts, and task decomposition to guide students in actively evaluating and regulating their AI use.
Talk is cheap: why structural assessment changes are needed for a time of GenAI
This article examines how universities are responding to generative AI by introducing policies and frameworks that rely heavily on student compliance. The authors argue that these discursive approaches are not sufficient to maintain assessment validity. Instead, they advocate for structural changes that redesign the assessment itself to address the challenges posed by AI.
Universities need to ‘redefine cheating’ in age of AI
Generative AI is challenging traditional definitions of cheating by enabling students to use tools in ways that are difficult to classify as either support or misconduct. A recent study found that while some students openly admitted to using AI to complete assignments, others used it more subtly, such as to check their understanding or improve writing. The findings highlight the need for universities to rethink academic integrity policies and create assessment strategies that reflect the realities of AI-assisted learning.
How vulnerable are UK universities to cheating with new GenAI tools? A pragmatic risk assessment
Most UK university students report using generative AI tools, often in the context of assessments that are highly vulnerable to misuse, such as unsupervised exams and essays. About one in five students admitted to cheating with AI in the past year, though many of their behaviors fall into grey areas that challenge traditional definitions of misconduct. This study highlights the urgent need for clearer academic integrity policies and assessment methods that reflect how students are actually using AI.
California went big on AI in universities. Canada should go smart instead
Simon Bates argues that generative AI may streamline tasks, but rushing to adopt it wholesale in universities risks undermining essential learning processes such as critical thinking, resilience, and deep focus. He contrasts California State University’s sweeping AI rollout with a measured Canadian approach that centers on five principles—culture, rules, access, familiarity, and trust—to integrate AI thoughtfully into education.
Asking a More Productive Question about AI and Assessment
Generative AI has made many traditional forms of assessment less convincing as evidence of student learning. Instead of focusing on how to stop students from using AI, educators are encouraged to reconsider what kinds of evidence they would now find persuasive. This approach calls for a shift in assessment design that reflects the widespread integration of AI into everyday tools and student workflows.
A Student Manifesto for Assessment in the Age of AI
A group of students from the London School of Economics (LSE) created a manifesto calling for student-centered assessment practices that reflect how generative AI is shaping learning. They emphasize the importance of critical thinking, transparency, flexible assessment formats, and equitable access to AI tools. The manifesto urges universities to work in partnership with students to design policies and assessments that support meaningful learning rather than simply focusing on preventing misconduct.