Generative AI & Writing

The increasing use of Generative AI (GenAI) tools such as ChatGPT, Gemini, and Copilot has sparked significant discourse in higher education, particularly in courses where writing is a central component of student assessment. The capabilities of GenAI tools range from drafting essays to generating complex narratives, raising profound questions about the potential benefits and risks of their use. In particular, educators face new challenges in assessing originality, critical thinking, and writing proficiency.

This literature review synthesizes current research on the impact of GenAI on writing-intensive courses, exploring how these technologies are reshaping pedagogical approaches and assessment strategies.

Scope of Generative AI in Higher Education and Student Impact

GenAI tools are increasingly used by both students and educators for drafting essays, editing, brainstorming, generating images, providing feedback, and grading. Crompton and Burke (2023) note a significant rise in publications on and interest in AI for higher education since 2016, with the most frequent application being learner support, including personalized learning experiences, feedback on assignments, and help with self-directed learning. Among educators, AI systems help streamline administrative tasks such as admissions, scheduling, and grading, allowing instructors to focus more on teaching and research (Crompton and Burke 2023). AI chatbots and virtual assistants are increasingly employed to provide 24/7 support for student queries and mental health services. While earlier research criticized the limited attention given to authors from education departments, Crompton and Burke’s (2023) systematic review revealed that the largest share of authors of publications on GenAI were in fact from education departments (28%), with computer science following at 20%. Additionally, the majority of participants in these studies were undergraduate students, accounting for 99 of 138 (72%), compared to just 12 of 138 (9%) graduate students.

Generative AI’s Potential Disruption of the Learning Process

Generative AI has the potential to transform traditional academic writing in universities, threatening our ability to communicate our knowledge to relevant audiences (University of Michigan 2023). GenAI can also undermine meaning-making and authentic learning tasks, which students often require to make their own sense of new information when writing. Since writing involves evaluation, source verification, fact-checking, critical thinking, and meaning-making activities, GenAI can disrupt these learning processes for students (University of Michigan 2023).

In a study examining whether ChatGPT could successfully complete graduate-level instructional design assignments, and the potential implications of AI for student written assessments, Parsons and Curry (2024) found that ChatGPT generated written responses that were generally accurate and relevant. It performed well on assignments that required summarization, basic analysis, and straightforward application of concepts. However, it was particularly limited on tasks requiring deep critical thinking, original insight, and complex problem-solving, sometimes producing content that was superficial or lacked depth. This suggests that while AI can handle certain tasks, it is not a suitable replacement for human thought, especially in areas requiring nuanced analysis, creativity, and sophisticated reasoning, and it underscores GenAI’s potential to obstruct students’ critical thinking and learning. According to Dwivedi et al. (2023), GenAI’s potential for content inaccuracy, weak logical flow, factual errors, lack of critical elaboration, and non-originality could be among the disruptive outcomes of a technology that is still being tested and developed.

AI Generated Feedback and Grading

Given the critical role that writing plays in learning and assessment within higher education, it is of growing importance for instructors to make thoughtful, informed decisions about how and in what capacity generative AI tools should be used to develop students’ writing skills (Escalante, Pack, and Barrett 2023). In a longitudinal study examining the preferences and perceptions of English as a New Language (ENL) students towards AI-generated feedback compared to traditional teacher feedback, Escalante, Pack, and Barrett (2023) found that AI feedback was effective for surface-level corrections (grammar, spelling, and punctuation). Teacher feedback, however, was more comprehensive, addressing deeper issues such as content, coherence, and style. Students benefitted most when they combined AI feedback with teacher feedback. While the study found that AI-generated feedback did not lead to superior linguistic progress among ENL students compared to human tutor feedback, Escalante, Pack, and Barrett (2023) nonetheless highlighted the potential time-saving benefits of AI-generated feedback for educators. The authors note that this time efficiency can be particularly advantageous in large classes, where providing individualized instructor feedback is logistically challenging and time-consuming. As such, AI-generated feedback can significantly “reduce the time teachers spend on reviewing and responding to each student’s assignment, thereby freeing up valuable time for other tasks” (Escalante, Pack, and Barrett 2023).

In a related study, Steiss et al. (2024) compared the quality of feedback provided by human instructors and ChatGPT on student essays and found that human instructors consistently provided more effective feedback than ChatGPT in all areas. Human feedback tended to be more specific and tailored to the student’s needs, often including nuanced suggestions that accounted for context and the student’s writing history. ChatGPT’s feedback was more consistent and could be generated quickly, providing detailed and voluminous comments. However, it sometimes lacked the depth and personal touch that human feedback offered. Interestingly, both human and AI feedback quality varied depending on the strength of the student’s essay. Language proficiency (native or non-native) did not seem to affect the quality of feedback from either humans or AI. 

In contrast, Dai et al. (2023) argued that AI-generated feedback was more readable and detailed than feedback from an instructor and that ChatGPT achieved high agreement with the instructor when assessing students’ performance. More importantly, they also found that ChatGPT could generate a considerable amount of process-focused feedback, which is regarded as more effective than task-focused feedback in shaping students’ task strategies (Dai et al. 2023). This captures the promise of ChatGPT in guiding students toward improving their work or even developing learning skills.

Finally, Tossell et al. (2024) argue that ChatGPT did not simplify writing but rather transformed students’ learning processes in unexpected ways. For example, participants found the tool valuable for learning, and their comfort with its ethical use increased after using it. Interestingly, students preferred instructors to use ChatGPT as a grading assistant, with proper supervision, rather than having it grade independently. Their initial perception of ChatGPT as a “cheating tool” evolved towards seeing it as a collaborative resource that requires human oversight and careful trust calibration.

Opportunities and Potentials

Writing Support for Students and Educators

Generative AI can help summarize academic papers, saving both educators and students time and enabling them to cover many publications in a limited period. It can also help summarize the literature around specific research questions by searching across many papers. For educators, GenAI tools can assist in developing educational materials such as quizzes, lesson plans, and summaries (Glaser 2023).

GenAI tools offer students a valuable starting point and aid in brainstorming, literature searches, concise summaries of relevant materials, knowledge acquisition, and writing support (Yusuf, Pervin, and Román-González 2024).

Academic Publishing and Data Analysis

Tools like ChatGPT, Copilot, and Gemini can act as data analysts, autonomously handling datasets, developing analytical strategies, cleaning data, running tests, and interpreting results. These tools can also conduct visualizations, descriptive analyses, and regression analyses, and even write sections of academic papers based on findings, showcasing their potential to revolutionize academic publishing and data analysis (University of Michigan 2023). ChatGPT has the potential to enhance the productivity of knowledge work through various mechanisms, such as “simplifying the information search process, but I predict that its most significant impact will be to provide a competent first draft for our most common written knowledge tasks” (Dwivedi et al. 2023).

Supporting Instructional Design and Improving Learning Outcomes

GenAI can also support instructional design efforts by providing feedback on the effectiveness of different teaching strategies and materials. Educators can use this feedback to optimize their teaching methods and improve learning outcomes. For instance, ChatGPT can assist in writing curriculum by providing subject-specific knowledge, generating content ideas, offering feedback, and creating assessments (Glaser 2023). 

Challenges and Risks

AI Hallucination and Misinformation 

A primary concern with using GenAI tools such as ChatGPT for writing is the occasional production of inaccurate outputs (Glaser 2023; Kasneci et al. 2023). AI systems that generate text and content can be unreliable: they may create statements that are wrong or lack evidence. Worse, malicious actors can trick these systems into producing fake information. This spread of misinformation can erode trust, damage reputations, and even be used to sway public opinion (University of Michigan 2023). For example, ChatGPT often produces misinformation or untrue information; this tendency to generate false facts is part of a problem known as AI hallucination.

Accessibility, Biases and Ethical Challenges 

Generative AI models learn from vast amounts of data, but this data can contain hidden biases. These biases can then be unknowingly incorporated into the AI’s outputs, reinforcing unfair stereotypes and prejudices already present in society (Glaser 2023; University of Michigan 2023). For example, ChatGPT doesn’t have a built-in moral compass. It can’t tell the difference between right and wrong or fact and fiction. It simply learns from the vast amount of information it processes online, which can include biases (UNESCO 2023, 11). AI systems designed for everyone often miss the mark when understanding different cultures. This “one-size-fits-all” approach ignores the unique customs and traditions of various communities, leading to outputs that overlook or misrepresent minority or underrepresented groups (University of Michigan 2023). 

Furthermore, the introduction of AI in higher education could deepen existing inequalities if not carefully managed, as resource-limited institutions may face challenges in adopting these technologies (Crompton and Burke 2023). The accessibility of GenAI, particularly tools like ChatGPT, is also limited due to the need for a paid subscription and adherence to usage restrictions, raising broader concerns about equity and access (Glaser 2023, 1950). 

Policy Gaps in Educational Institutions

Many educational institutions lack integrated institutional guidelines and policies to regulate the use of AI tools like ChatGPT in classrooms. Ghimire and Edwards’s (2024) research highlighted a significant gap in policy: many schools lack specific guidelines for the ethical use of these AI technologies. The authors also found that high schools are generally less likely to have such policies than universities. Ghimire and Edwards (2024) found that “over 80% of higher education institutions reported active policy development, 5% already have a policy, and 15% have no plans to enact one. In contrast, only 50% of high schools are in the process of policy formulation, while approximately 45% neither have a policy nor plans to develop one”. Even when such policies exist at universities, they often neglect crucial areas like student data privacy and how these AI algorithms work (Ghimire and Edwards 2024).

A study examining the necessity of policy regulation on the use of GenAI found that the largest proportion of respondents indicated that GenAI should be “restricted to self-learning.” A considerable number indicated “prevention of use on assignments and research,” while a more conservative group favored a “total ban” of the tool in the academic environment. While these responses were idiosyncratic to each respondent, more than two-thirds reiterated that a “strict penalty” should be enforced when the tool is used for cheating or plagiarism.

Student Perceptions on the Use of GenAI

Recent research highlights varied student perceptions regarding the use of Generative AI (GenAI) tools like ChatGPT in higher education. Tossell et al. (2024) investigated college students’ views on using ChatGPT for writing assignments. Initially perceived as a potential tool for cheating, students eventually saw it as a beneficial aid when used alongside human oversight. They appreciated ChatGPT’s assistance in the writing process but raised concerns about the reliability of its output and the lack of detailed feedback. While students were comfortable with instructors using ChatGPT for grading assistance, they were less supportive of its use for independent grading. Overall, students’ perspectives on ChatGPT reflect a growing trust in AI, tempered by a call for responsible use and continued human oversight (Tossell et al. 2024). 

Strzelecki (2024) further explored factors influencing students’ acceptance of ChatGPT, utilizing the unified theory of acceptance and use of technology (UTAUT2). This study identified key predictors of students’ intention to use ChatGPT, including their habitual use of technology, belief in the tool’s usefulness (performance expectancy), and enjoyment derived from using it (hedonic motivation). The findings underscore that students’ acceptance of ChatGPT is strongly influenced by their familiarity with technology and perceived benefits, indicating a positive outlook toward integrating such tools into their educational practices (Strzelecki 2024). 

Chan and Hu (2023) examined student familiarity with and openness to using GenAI tools in learning contexts. While students acknowledged the potential benefits of GenAI, such as personalized learning support and assistance with writing and research, they voiced concerns about its accuracy, privacy risks, ethical implications, and potential impacts on personal development and career prospects. These concerns highlight a cautious approach towards the integration of GenAI in education, emphasizing the need for addressing accuracy and ethical considerations while leveraging its advantages (Chan and Hu 2023). 

Approaches to (Re)Designing Assignments in the Age of AI

The increasing use of GenAI in higher education has created a growing need to rethink the learning objectives and assessments of many university courses. The following outlines various approaches and strategies for redesigning assessments to adapt to the reality of AI use in higher education.


AI-Immune Assessments: These are in-class assessments where students explain or demonstrate their reasoning and knowledge (Trust 2023; University of Massachusetts Amherst Center for Teaching and Learning 2024; Weissman 2023). 

  • Oral exams and viva voce 
  • Oral presentations, live debates, and role-playing 
  • In-class quizzes and tests 
  • In-class peer review and evaluation 
  • Class discussion synthesis 

Process-oriented approach to assessing students: This approach shifts the emphasis from evaluating the final product to evaluating the process (Mulder, Baik, and Ryan 2023). 

  • Metacognitive exercises – assess students’ critical reflection and metacognitive skills by asking them to self-reflect on their learning outcomes. 
  • Reflective journals  
  • Process notebooks – students keep notes and document the steps they are taking and what they have learned. 

Incorporate the use of AI tools into the assignments: In these approaches to redesigning assessments, students are encouraged to use an AI tool to complete the assignments (UBC Centre for Teaching Learning and Technology 2023; Mulder, Baik, and Ryan 2023). Examples of these assessments include the following: 

  • AI critique assignment (judge the AI output) 
  • AI art critique assignment 
  • Documenting AI collaboration (chart your AI journey) 
  • AI-assisted thematic analysis 
  • AI-assisted synthesis analysis 
  • Using AI to suggest a structure or outline for a paper 
  • Using AI to brainstorm ideas and create media and infographics 
  • Using AI to provide feedback on student work 
  • Data visualization projects using AI 
  • Debates on the ethical implications of using AI 


Project-based Learning: These assessments are targeted at encouraging collaborative learning and may therefore discourage individual cheating with AI (Meakin 2024; Mulder, Baik, and Ryan 2023). Examples of project-based learning include experiential learning activities, group projects, situational analyses, and group presentations. 


The Two-Lane approach: This is a blend of in-class contemporaneous assessments with human-AI collaborative assessments (Liu and Bridgeman 2023; University of Massachusetts Amherst Center for Teaching and Learning 2024). 

  • Lane 1: Design a task that cannot be completed using an AI tool (oral assessment and live simulations in class). 
  • Lane 2: Design the second task to involve AI collaboration, where students can leverage AI to enrich their learning experience and understanding. 

Personalized, contextualized assessments (Authentic assessments): These assignments require students to reflect on their individual experiences and perspectives or to provide responses based on specific disciplinary knowledge applied to real or hypothetical case studies (Mulder, Baik, and Ryan 2023; UBC Centre for Teaching Learning and Technology 2023). Examples of such authentic learning activities include case study assignments and situational analyses. These tasks demand that students engage with specific class discussions or draw from personal experiences. According to Netto (2023), case study assessments are currently less likely to produce high-quality responses from AI tools like ChatGPT. Consequently, case studies continue to be valuable in the social sciences as they emphasize human insight and offer a thorough understanding that can help prevent undue reliance on or manipulation by GenAI tools. 

References

Crompton, Helen, and Diane Burke. 2023. “Artificial Intelligence in Higher Education: The State of the Field.” International Journal of Educational Technology in Higher Education 20 (1): 22. https://doi.org/10.1186/s41239-023-00392-8. 

Dai, Wei, Jionghao Lin, Flora Jin, Tongguang Li, Yi-Shan Tsai, Dragan Gašević, and Guanliang Chen. 2023. “Can Large Language Models Provide Feedback to Students? A Case Study on ChatGPT.” 

Dwivedi, Yogesh K., Nir Kshetri, Laurie Hughes, Emma Louise Slade, Anand Jeyaraj, Arpan Kumar Kar, Abdullah M. Baabdullah, et al. 2023. “Opinion Paper: ‘So What If ChatGPT Wrote It?’ Multidisciplinary Perspectives on Opportunities, Challenges and Implications of Generative Conversational AI for Research, Practice and Policy.” International Journal of Information Management 71 (August):102642. https://doi.org/10.1016/j.ijinfomgt.2023.102642. 

Escalante, Juan, Austin Pack, and Alex Barrett. 2023. “AI-Generated Feedback on Writing: Insights into Efficacy and ENL Student Preference.” International Journal of Educational Technology in Higher Education 20 (1): 57. https://doi.org/10.1186/s41239-023-00425-2. 

Ghimire, Aashish, and John Edwards. 2024. “From Guidelines to Governance: A Study of AI Policies in Education.” arXiv. http://arxiv.org/abs/2403.15601. 

Glaser, Noah. 2023. “Exploring the Potential of ChatGPT as an Educational Technology: An Emerging Technology Report.” Technology, Knowledge and Learning 28 (4): 1945–52. https://doi.org/10.1007/s10758-023-09684-4. 

Kasneci, Enkelejda, Kathrin Sessler, Stefan Küchemann, Maria Bannert, Daryna Dementieva, Frank Fischer, Urs Gasser, et al. 2023. “ChatGPT for Good? On Opportunities and Challenges of Large Language Models for Education.” Learning and Individual Differences 103 (April):102274. https://doi.org/10.1016/j.lindif.2023.102274. 

Liu, Danny, and Adam Bridgeman. 2023. “What to Do about Assessments If We Can’t out-Design or out-Run AI? – Teaching@Sydney.” July 12, 2023. https://educational-innovation.sydney.edu.au/teaching@sydney/what-to-do-about-assessments-if-we-cant-out-design-or-out-run-ai/. 

Meakin, Lynsey. 2024. “AI and Assessment: Rethinking Assessment Strategies and Supporting Students in Appropriate Use of AI.” My College. 2024. https://my.chartered.college/impact_article/ai-and-assessment-rethinking-assessment-strategies-and-supporting-students-in-appropriate-use-of-ai/. 

Mulder, Raoul, Chi Baik, and Tracii Ryan. 2023. “Rethinking Assessment in Response to AI.” Melbourne Centre for the Study of Higher Education. 

Parsons, Bruce, and John H. Curry. 2024. “Can ChatGPT Pass Graduate-Level Instructional Design Assignments? Potential Implications of Artificial Intelligence in Education and a Call to Action.” TechTrends 68 (1): 67–78. https://doi.org/10.1007/s11528-023-00912-3. 

Steiss, Jacob, Tamara Tate, Steve Graham, Jazmin Cruz, Michael Hebert, Jiali Wang, Youngsun Moon, Waverly Tseng, Mark Warschauer, and Carol Booth Olson. 2024. “Comparing the Quality of Human and ChatGPT Feedback of Students’ Writing.” Learning and Instruction 91 (June):101894. https://doi.org/10.1016/j.learninstruc.2024.101894. 

Tossell, Chad C., Nathan L. Tenhundfeld, Ali Momen, Katrina Cooley, and Ewart J. De Visser. 2024. “Student Perceptions of ChatGPT Use in a College Essay Assignment: Implications for Learning, Grading, and Trust in Artificial Intelligence.” IEEE Transactions on Learning Technologies, 1–15. https://doi.org/10.1109/TLT.2024.3355015. 

Trust, Torrey. 2023. “Essential Considerations for Addressing the Possibility of AI-Driven Cheating, Part 1.” Faculty Focus | Higher Ed Teaching & Learning (blog). August 2, 2023. https://www.facultyfocus.com/articles/teaching-with-technology-articles/essential-considerations-for-addressing-the-possibility-of-ai-driven-cheating-part-1/. 

UBC Centre for Teaching Learning and Technology. 2023. “Assignment and Assessment Design Using Generative AI.” AI In Teaching and Learning. August 2, 2023. https://ai.ctlt.ubc.ca/assignment-and-assessment-design-using-generative-ai/. 

UNESCO. 2023. “ChatGPT, Artificial Intelligence and Higher Education: What Do Higher Education Institutions Need to Know? – UNESCO-IESALC.” 2023. https://www.iesalc.unesco.org/en/2023/04/14/chatgpt-and-artificial-intelligence-in-higher-education-quick-start-guide-and-interactive-seminar/. 

University of Massachusetts Amherst Center for Teaching and Learning. 2024. “How Do I (Re)Design Assignments and Assessments in an AI-Impacted World?: Center for Teaching & Learning : UMass Amherst.” 2024. https://www.umass.edu/ctl/how-do-i-redesign-assignments-and-assessments-ai-impacted-world. 

University of Michigan. 2023. “U-M GenAI Committee Report.” 2023. https://drive.google.com/file/d/101zhMpzr67SRePbbxfHc87j-5mSlkuOL/view?usp=drive_link&usp=embed_facebook. 

Weissman, Jeremy. 2023. “ChatGPT Is a Plague Upon Education.” Inside Higher Ed. 2023. https://www.insidehighered.com/views/2023/02/09/chatgpt-plague-upon-education-opinion. 

Yusuf, Abdullahi, Nasrin Pervin, and Marcos Román-González. 2024. “Generative AI and the Future of Higher Education: A Threat to Academic Integrity or Reformation? Evidence from Multicultural Perspectives.” International Journal of Educational Technology in Higher Education 21 (1): 21. https://doi.org/10.1186/s41239-024-00453-6.