
ISSN: 2641-1768
Hamid A Rafizadeh* and Elaine V Zamonski
Received: November 29, 2024; Published: January 03, 2025
Corresponding author: Hamid A Rafizadeh, 320 Northview Road, Oakwood, OH 45419, USA.
DOI: 10.32474/SJPBS.2024.08.000291
The rapid integration of AI tools like ChatGPT in education has redefined the learning landscape, promising benefits but also posing significant challenges. By analyzing actual classroom dynamics within engineering education, this paper examines the nature of AI’s role, particularly its effects on critical thinking and academic integrity. In courses such as Engineering Organizational Development (EOD) and Graduate Academic Research (GAR), the pervasive availability of ChatGPT has led some students to rely heavily on AI-generated content, disengaging from core learning processes. Despite course redesign efforts to curb plagiarism and foster meaningful engagement, many students remain dependent on AI, revealing a potential new “addiction” to AI-generated responses. This trend underscores the need for educational institutions to develop balanced policies and operational methodologies that incorporate AI as a complementary tool rather than a replacement for essential academic skills. Ultimately, this study highlights the complexities of AI’s integration in education and the importance of a foundational understanding of theory and practice when teaching students to value the learning journey as they navigate an increasingly AI-driven academic environment.
Keywords: AI Tools; ChatGPT; Engineering Education; AI Plagiarism; International Students; Course Redesign
The COVID-19 pandemic catapulted higher education into a revolutionary era, forcing a rapid transition from traditional in-person teaching to online learning [1]. This shift exposed students to new technologies that fundamentally transformed their educational experiences [2]. However, the higher education system, unprepared for such a dramatic change, struggled to address conflicting perceptions of its value. On one side, higher education was seen as a pathway to employability and social mobility, while on the other, it was envisioned as a means to cultivate critical thinking and enhance individual knowledge-seeking and processing capabilities [3]. Academic misconduct, including plagiarism, has long been a persistent challenge in higher education. It is widely understood that plagiarism impedes the development of critical thinking and analytical skills. This issue becomes more concerning with the rise of “contract cheating,” where students outsource assignments to third parties [4]. Will technological advancements continue to exacerbate academic dishonesty in higher education? Characterizing academic dishonesty as integrity violations offers a formal lens, but the effectiveness of this approach remains questionable [5,6]. Faculty often face a moral duty to uphold academic integrity, but this ideal frequently falters in practice, with cheaters receiving lenient penalties and teachers avoiding reporting violations to sidestep administrative burdens [7]. Despite integrity-promotion strategies and insights from research on cheating, the problem persists [5,8,9].
We agree with Anson [10] in recognizing that any discussion of student behavior labeled as plagiarism is inherently shaped by the ever-evolving social construction of such terms. Factors such as the modes of student misconduct, the organizational and technological frameworks for detecting and addressing academic dishonesty, and varying perceptions of its prevalence and severity contribute to a complex and multifaceted phenomenon [11-14]. In our view, addressing this complexity necessitates a deeper examination of both students and teachers within authentic classroom environments. Such observations highlight the reality that students are highly adaptive in responding to faculty efforts to counteract cheating, often driven by feelings of pressure and a desire for relief, as noted by Jenkins et al. [13]. This paper, therefore, investigates the use of ChatGPT in classrooms primarily composed of international students. The aim is to generate insights and data that can inform theoretical analyses and contribute to a deeper understanding of student behavior in AI-driven academic settings. Artificial intelligence tools like ChatGPT are reshaping how students engage with learning and how educators evaluate academic progress. While AI provides opportunities for personalized learning and efficient content creation, it also introduces challenges to academic integrity and critical thinking, particularly among vulnerable groups such as international students. For international engineering students, plagiarism often occurs within an educational environment that prioritizes the development of critical thinking skills over strict plagiarism remediation or punishment [15]. These students, navigating unfamiliar cultural and academic norms, may find ChatGPT an appealing shortcut, opting for quick, AI-generated responses at the expense of essential learning processes. In teaching courses such as Engineering Organizational Development (EOD) and Graduate Academic Research (GAR), we have observed a troubling trend: an increasing reliance on AI-generated content leading to disengagement from foundational skills like critical analysis and reflective writing. This paper explores the dual-edged impact of ChatGPT on education through actual studies of classroom dynamics, examining its benefits and drawbacks in academic contexts. By focusing on the experiences of international graduate students in engineering courses, we highlight the need for adaptive strategies that harness AI’s potential while safeguarding the core competencies critical for long-term success in both academia and the workforce.
Empirical studies on academic cheating have predominantly focused on the U.S., but similar behaviors are evident globally. In European universities, for example, cheating often occurs under specific conditions: when benefits outweigh risks, when the environment is conducive to copying, when sanctions are perceived as minimal, or in the absence of an institutional code of ethics [16,17]. Against this backdrop, the evolution of education has been shaped by technological advancements. From mastering foundational tools like Microsoft Office to tackling the complexities of programming languages such as Python and MATLAB, educational institutions have continually adapted to integrate new problem-solving technologies. The emergence of AI tools like ChatGPT represents the latest stage in this progression. Historical patterns suggest that, like previous innovations, these tools will ultimately be assimilated and regulated effectively within academic contexts. Menekse [18] envisions ChatGPT as a transformative tool in the classroom, providing both students and professors with access to vast, organized information akin to the Internet. Yet, ChatGPT and the Internet are distinct in their interaction processes. While the Internet necessitates reading, thinking, and analysis to transform its content into academic work, ChatGPT delivers output that already resembles completed assignments. This feature appeals to students seeking to streamline their learning process but raises concerns about bypassing the critical thinking and analytical skills essential for academic and personal growth.
Optimists believe ChatGPT can significantly enhance instructional resources, aid in the development of new tech-enhanced learning environments, reduce instructor workloads, and provide students with more opportunities to design and develop their own learning experiences. This vision may align with future classroom realities, where AI tools could facilitate personalized learning and adaptive teaching strategies. However, the immediate challenge during this transitional period lies in managing ChatGPT-assisted behavior to ensure it complements rather than undermines traditional educational practices. Futuristic views on AI in education present a spectrum of possibilities. Supiano [19] highlights ChatGPT’s potential to offer students more personalized support, allowing them to take risks, tackle ambitious projects, and learn according to their preferences and pace. ChatGPT’s constant availability and tailored assistance could foster a more engaging and student-centered learning environment. On the flip side, some argue that traditional exams and tightly controlled learning environments can effectively neutralize ChatGPT’s influence in the transition period, buying time for its constructive integration into the classroom. Another benefit of such exams is that they test student understanding and application of knowledge without the aid of AI, helping to ensure academic integrity. French et al. [20] see all such positions as false and assert that there is no evidence for the academic benefits or pedagogical merit of such endeavors.
Regardless of the disagreements on specific issues, generalizations about AI’s impact on education are valuable. Equally valuable are insights from specific disciplines, especially those concerning vulnerable groups. For instance, student capability differentials are significant in determining how well individuals adapt to AI tools and fit into future job markets or graduate programs. Students with higher capabilities often have better preparation and test-taking strategies, enabling them to perform well under the pressure of multiple courses. In contrast, less prepared students might rely heavily on AI tools like ChatGPT, potentially exacerbating existing educational inequalities. During this transitional period, adopting a student-centered approach may offer a more effective strategy for dealing with the complexities introduced by ChatGPT. In engineering, students often struggle with writing skills [21,22]. From that perspective, ChatGPT could be used to improve writing proficiency and alleviate writing anxiety. This application aligns with the broader educational goal of developing well-rounded individuals equipped with diverse skills. However, from our perspective of educating engineering students, a less generalized and more pragmatic view of ChatGPT is necessary to address its real-world impact. Observing both teachers and students in actual classroom settings provides the best laboratory for understanding the pragmatic effects of ChatGPT during this transition. Practical insights from these observations can inform policies and teaching strategies that maximize the benefits of AI while mitigating its potential drawbacks.
In our teaching experience, international students are particularly vulnerable during technological transitions. Habib et al. [23] observe that disparities in economy, access, knowledge, and power create a “digital divide.” This divide often leaves international students lagging behind their native peers, affecting their academic performance and integration into new educational environments. In the fall 2023 graduate course Engineering Organizational Development (EOD), consisting solely of international students, this issue was more pronounced, with ChatGPT significantly altering classroom dynamics. In the EOD class, about half the students relied heavily on ChatGPT, showing little interest in learning and applying the course models. Their completed assignments consisted predominantly of ChatGPT-generated content, with a group of 6-7 students presenting nearly identical work. Despite instructor feedback and poor grades, their only response to the problem of identical work was to rearrange the order of the identical ChatGPT-generated paragraphs, aiming to create an illusion of individual originality and deceive the instructor. The EOD course was designed to engage students with essential models of individual and organizational life through intriguing case studies and personalized projects. This approach aimed to foster a deep understanding of various aspects, including leadership, power, communication, groups and teams, conflict and negotiation, emotions and moods, organizational culture, motivation, personality, decision-making, and justice. For instance, the course explored multiple models of leadership, each offering a distinct perspective: the Trait Theory of Leadership highlights significant leadership traits, the Behavioral Theory of Leadership posits that leaders can be trained regardless of their traits, and the Contingency Theory of Leadership suggests that, regardless of traits and training, successful leadership arises from matching the leader with the followers. In case studies or projects, students had to decide which leadership model should be applied for a deeper understanding. These assignments, designed to encourage critical thinking and practical application, now faced the challenge of students bypassing the learning process altogether by relying on AI-generated content.
Case studies included topics like “Should women have more power?”, “You want me to do what!” (about a manager asking an employee to perform a seemingly inappropriate task), “There is a drone in your soup,” “When the going gets boring,” “Cheating is a decision,” and “Laziness is contagious,” all focusing on real-world scenarios to keep students interested and curious. These cases aimed to enable students to apply the models and develop a deeper, multifaceted understanding of complex issues. However, the tightly knit group of 6-7 ChatGPT users showed no interest in engaging with these materials. Instead, they relied solely on AI-generated content, unaffected by instructor feedback and their failing grades. In addition to case studies, the course included two major projects designed to highlight the real-world application of skills learned. The first project required students to improve individual and organizational behavior in an organization they were familiar with, such as a workplace, family, sports team, or place of worship. The second project involved choosing an exceptional individual who had a significant impact on others’ behavior and organizational dynamics. Students selected notable figures like Jesus, Nelson Mandela, Malala Yousafzai, Deng Xiaoping, Henry Ford, and Bill Gates. In these projects, students had full control over their knowledge-seeking and processing. The projects allowed them to apply the models and analytical skills to situations of their personal interest.
Despite the course’s student-centered design, the ChatGPT users only saw value in generating and submitting AI-generated content. Their response to instructor feedback focused solely on refining their use of ChatGPT rather than engaging with the course material. This behavior highlighted a critical issue: some students viewed knowledge as a mere output of ChatGPT, disregarding the educational value of the course.
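The paragraph-rearranging tactic described above also illustrates why such surface-level disguises are easy to detect. As a hypothetical illustration (not a method used in this study), an instructor could compare submissions with an order-insensitive similarity measure: reordering identical paragraphs leaves the paragraph set unchanged, so a set-based score still flags the submissions as duplicates.

```python
# Hypothetical sketch: order-insensitive duplicate detection.
# Not the authors' method; it only illustrates why rearranging
# identical ChatGPT-generated paragraphs fails to hide duplication.
import re

def paragraph_set(text: str) -> set:
    """Split on blank lines; normalize whitespace and case per paragraph."""
    paragraphs = re.split(r"\n\s*\n", text)
    return {re.sub(r"\s+", " ", p).strip().lower() for p in paragraphs if p.strip()}

def jaccard(a: str, b: str) -> float:
    """Jaccard similarity between the paragraph sets of two submissions."""
    sa, sb = paragraph_set(a), paragraph_set(b)
    return len(sa & sb) / len(sa | sb) if (sa | sb) else 0.0

# Two hypothetical submissions: the same paragraphs in swapped order.
s1 = "Leaders are born with certain traits.\n\nLeaders can also be trained."
s2 = "Leaders can also be trained.\n\nLeaders are born with certain traits."
print(jaccard(s1, s2))  # 1.0 -- reordering alone changes nothing
```

Because the score compares paragraph sets rather than sequences, shuffling paragraph order has no effect; only genuine rewording lowers the similarity.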
As the digital age advances, education faces unprecedented challenges, particularly with the integration of Artificial Intelligence (AI) tools like ChatGPT. Addressing issues such as plagiarism and contract cheating requires a multifaceted approach. One example is the Virtues Approach, which instills ethical behavior by teaching students not to cheat. Another is the Prevention Approach, which minimizes opportunities for academic dishonesty through thoughtful course and test design as well as delivery techniques. Finally, the Police Approach focuses on detecting, penalizing, and educating students who engage in misconduct [15,24,25]. Elements of these strategies are often integrated into classrooms and reflected in traditional exams, long regarded as the cornerstone of assessing knowledge and critical thinking. However, the rise of AI tools challenges the efficacy of these methods. How would students accustomed to relying on AI perform in a closed-book, closed-notes, closed-computer exam? This question was explored in the fall of 2023 in an Engineering Organizational Development (EOD) class, shedding light on the evolving dynamics of education in the AI era.
The class, composed of 20 international graduate engineering students, encountered its first “everything closed” exam, which included five essay questions on models learned during the course. Students were informed in advance about the exam structure, which would feature prompts like: “Describe a version of the leadership model” and “Compare three versions of the power model and offer reasons as to which one is better.” The results were alarming: out of 20 students, 12 failed, the majority of them heavy ChatGPT users. The instructor, seeking to understand the widespread failure, convened a meeting with the students. Most admitted their shortcomings, acknowledging, “I did not study well, and it is my fault,” and requested another chance to retake the exam, pledging to study this time. The instructor granted a retake. The exam solution had already been reviewed in class, and students were provided with the solution file. The instructor stressed the importance of studying the solution file, hinting that the retake would be similar. During this process, a student who had passed the original exam raised concerns, arguing that allowing only the failing students to retake the exam was unfair, predicting they might all get an A. Despite this, the retake proceeded, and the exam schedule gave the failing students a week to prepare. The outcome of the retake was unexpected. Although the retake was nearly identical to the original exam, only two students improved their grades from F to A, while ten failed again. When questioned about their repeated failure, a student, later identified as the leader of the devout group of 6-7 ChatGPT users, revealed, “You did not say it would be the same exam. You said it would be slightly changed, so we didn’t study it.” This response highlighted the depth of their dependence on AI assistance. There was not only an over-reliance on outsourcing through ChatGPT but also a lack of desire or motivation to change their way of learning. They failed to develop the necessary skills to independently study and comprehend the course material. They struggled to grasp the reviewed exam content and solution file, which focused on describing and comparing models. Their proficiency was limited to generating content through ChatGPT. This stood in stark contrast to generalizations that students using ChatGPT would effectively engage in a cycle of idea development, critical assessment, and refinement, ultimately producing work that meets both their own and the instructor’s expectations [26].
General ChatGPT usage studies suggest that students with different growth mindsets achieve varying levels of learning and critical thinking when using AI tools [27]. This implies that the EOD ChatGPT-user group may have suffered from low growth mindsets, unable to utilize ChatGPT critically. The instructor, too, may have struggled with a low growth mindset regarding AI tools, making it difficult to guide students in the tool’s proper use. The question remains: why did students avoid engaging with the models taught in class? Assessing and analyzing any concept requires organizing thoughts and articulating them in a structured manner. McMurtrie [28] suggests that ChatGPT can support brainstorming, essay initiation, idea clarification, and draft refinement. However, a large group of EOD students failed to engage in any such behavior, and the instructor struggled to break their ChatGPT-ingrained habits. The traditional grading method became ineffective as these students seemed indifferent to the risk of failing the course, persisting in their reliance on ChatGPT despite the grade implications. In contrast, another course, Graduate Academic Research (GAR), adopted a notably different ChatGPT policy. GAR allowed ChatGPT experimentation, using the tool as a research assistant throughout the process, from brainstorming to research to writing. Despite this innovative approach, a number of students still copied heavily from ChatGPT on their final research paper. For their final writing assignment, students were asked to write a personal reflection on their successes and challenges in learning to write, discuss, and present their research, as well as to evaluate ChatGPT’s role (good, bad, or neutral). In this process, students were instructed to ask ChatGPT for a basic outline for their reflection essay. While ChatGPT provided excellent outlines, most students chose to have ChatGPT write the entire reflection essay for them. This approach ran directly counter to the assignment’s intent, which was to draw on and discuss individual experiences. The assignment was designed to outwit ChatGPT by requiring personal insights. It is perplexing that graduate engineering students failed to grasp the concept of balanced and proper use of ChatGPT after completing an entire course designed to teach them how to effectively use and incorporate ChatGPT in the research process.
The EOD final exam was a significant test of the ChatGPT-user group’s behavior. The exam consisted of a single essay question plus a practical question that involved using Microsoft Excel and a simulation software. The essay required students to describe, compare, and contrast three motivation theories, which they had been previously informed could appear on the exam. The Excel and simulation question required analyzing data on “personal power.” Scheduling added a new layer of complexity. The university had scheduled another EOD class’s final exam on Monday and the ChatGPT-dependent EOD class’s exam on Friday, both taught by the same instructor. This raised concerns that the ChatGPT-user group might cheat by obtaining information on the essay question from the earlier exam. Also, since students in the Monday class were allowed to keep the Excel part of their exam file, it could potentially be shared with the ChatGPT-user group. The instructor considered changing the essay question and altering the Excel data but ultimately decided to keep the exam unchanged to observe the ChatGPT users’ behavior. Although there was no concrete proof of cheating, the final exam results revealed anomalies. In the written portion, the ChatGPT group performed exceptionally well, almost textbook perfect, in describing the three motivation theories but struggled with comparing and contrasting them. Additionally, they all failed the Excel part of the exam, despite the necessary information being covered in class materials and exercises. Overall, the final exam showed the ChatGPT users improving from an average of F to an average of C.
By the semester’s end, four core members of the ChatGPT-user group, including their leader, failed the course, while others adapted and improved their performance. It is our belief that the presence of an academically weak group led by a ChatGPT-reliant leader significantly influenced this outcome. This contrasts with the other EOD class, in which international students, making up 60% of the enrollment, adjusted their approach to learning early in the course after initial attempts to use ChatGPT in their assignments. These observations underscore two critical aspects: the behavior of vulnerable students and the discipline-specific challenges associated with essay writing. While general perceptions of ChatGPT use may lean toward optimism, a closer examination of vulnerable student groups and underprepared instructors reveals overreliance on ChatGPT as a significant concern during this transitional period. As educational institutions work to integrate AI tools, the development of balanced policies and effective guidance is vital to fostering students’ independent critical thinking and study skills. However, this goal may remain unattainable if educators adopt weak approaches to virtue development, prevention, and enforcement [15,24,25]. Furthermore, excluding university administrations from understanding and addressing classroom realities exacerbates the challenge, leaving gaps in accountability and oversight [7].
In the evolving landscape of engineering education, graduate students are expected to master critical thinking and produce written material suitable for diverse audiences. However, the rise of AI tools like ChatGPT raises pressing questions: Could these tools undermine essential skills, or should they become integral to the educational process? The foundational structure of the manager-managed duality (MMD) offers a lens to examine this dilemma [44]. Historically, societies have relied on the MMD framework to organize human capabilities for producing and distributing goods and services. In this setup, a select few act as managers, directing the larger group known as the managed. The classroom mirrors this structure, where both teachers and students must decide whether ChatGPT should replace critical thinking and writing. Tension arises when teachers resist ChatGPT as a substitute, while students advocate for its use.
Proponents of student AI use argue that just as engineering students leverage tools like MATLAB to streamline complex calculations, they should similarly be able to utilize ChatGPT for writing tasks [29]. However, opponents contend that success with MATLAB requires a solid grasp of engineering theories and concepts, making it ineffective for students lacking foundational knowledge [30]. In contrast, ChatGPT can be employed in ways that do not necessitate deep subject understanding. While MATLAB demands a technically proficient user, ChatGPT can function as an effective editing tool for students who have already mastered the theoretical and practical aspects of engineering, aiding in refining and enhancing their writing. This targeted use can be especially advantageous for vulnerable groups, such as international students, by supporting their academic communication skills without undermining their foundational learning.
Grobe [26] describes his process for producing a ChatGPT-assisted assignment: “I had definite ideas and arguments I wanted to make. I fed those ideas and arguments into ChatGPT, assessed the output, and judged its initial responses too predictable and superficial.” This cycle is repeated: ChatGPT’s output is adjusted, corrected for shortcomings like lack of evidence, and fed back into ChatGPT to obtain an improved response, thus using ChatGPT as a “writing assistant.” If students used ChatGPT like a grammar-checker-type tool, there would be no concerns over its use in any setting. Grobe concludes that “Far from replacing human intelligence, it will provide new starting points for some of the processes we routinely use to think.” Thus, when individuals struggle to form thoughts to write, they can use ChatGPT to generate a credible first draft, which they can then revise into their own work. In our opinion, if this view were the reality, there would be no controversy over ChatGPT’s use.
However, classroom observations in the Engineering Organizational Development (EOD) and Graduate Academic Research (GAR) courses revealed a different reality. A large number of international engineering students used ChatGPT as a shortcut to complete their assignments, making no effort to interpret or validate its output. These students submitted ChatGPT-generated content as their own work, forgoing the opportunity to refine or improve it. From the MMD perspective, the EOD instructor also contributed to the problem by insisting that students develop writing skills independently, without teaching them how to use ChatGPT constructively. This situation indicates a breakdown in the EOD and GAR classrooms at both the managerial and managed levels. In a broken MMD, attention often shifts to force-driven approaches that rely on a police-catch-punish methodology [31,32]. Keegin [33] asserts, “make it very, very clear to your student body—perhaps via a firmly worded statement—that AI-generated work submitted as original effort will be punished to the fullest extent of what your institution allows.” While our focus has been on “vulnerable students,” it is important to note that there are also “vulnerable educational institutions.” Symptoms of vulnerability include large classrooms, overburdened teachers, and high populations of struggling students [18]. Under these circumstances, AI tools may become tempting stress-relievers. In a bid to curb the rising tide of ChatGPT-assisted plagiarism, the Graduate Academic Research (GAR) course embarked on an ambitious redesign. The initiative was grounded in Sowell’s [34] insightful research, which identified key factors driving plagiarism among international students: immense pressure to succeed, challenges in adapting to American academic culture, and unfamiliarity with rigorous academic research standards. The revamped GAR course aimed to realign its content with essential engineering skills, such as writing lab reports, creating graphic visuals from data, and conducting field-specific research. This strategic alignment sought to lower the incentive to plagiarize by making the coursework more relevant and engaging. The instructor monitored rates of missing and late assignments to gauge student engagement and improvement.
However, the student response to the redesign was unexpected. When asked to write about general attitudes toward plagiarism in their home countries, many students simply copied and pasted responses generated by ChatGPT. This behavior was perplexing, as these same students had demonstrated a clear understanding of Sowell’s plagiarism prevention strategies during small group discussions. Despite their ability to articulate these strategies verbally, they continued to rely on ChatGPT for written assignments, where ChatGPT’s generic responses supplanted their own ideas from class discussions. To enhance the practical relevance of the GAR course, the final research assignment was designed as a mock version of the EOD course’s final research project. This connection aimed to motivate students to hone their research skills in a low-pressure environment, treating the GAR final assignment as a rehearsal for the more demanding ChatGPT-restricted work in the EOD course. The GAR redesign also made the final project a “group project,” promoting incremental, skills-based research. This collaborative approach was more supportive of students who were hesitant to voice their concerns individually. Within the group, they could communicate their concerns to the professor through peers, facilitating a more supportive learning environment in terms of feedback and guidance. Additionally, the group format addressed the concerns raised earlier about students’ varying growth mindsets toward ChatGPT’s roles and functionalities [27].
Adopting a dynamic, interactive approach, the redesigned GAR course broke down the research process into manageable steps. Classroom discussions revealed that students favored this method, requesting more support with finding and reading sources, note-taking, summarizing, paraphrasing, and properly citing to avoid accidental plagiarism. They essentially sought “scaffolding support” in developing their research skills. Despite the potential benefits of this approach, Robin’s [35] critique that such structured learning can become artificial, alienating, and even “infantilizing” should be taken into consideration. To measure the impact of the redesign on student performance, the class was compared with a pre-redesign GAR class, revealing noticeable improvements. The number of missing or late assignments, used as a proxy for performance, decreased markedly, indicating better student engagement. The grade distribution shifted toward higher grades, supporting the pedagogical move towards lab-like, step-by-step instructions. Despite the positive trends in performance and reduced plagiarism, the final project still exhibited significant instances of plagiarism. The redesign alone was insufficient to completely mitigate plagiarism in large assignments among graduate engineering international students. On the final reflection essay for the GAR course, in which students were to discuss their experiences with research writing and ChatGPT, many reverted to submitting AI-generated, impersonal, and generic reflections even when provided with a ChatGPT template for the essay. This persistent issue underscores the challenge of integrating ChatGPT into the academic process without compromising integrity. The introduction of ChatGPT has undoubtedly transformed the landscape of academic integrity. While Grobe’s [26] optimistic assessment of ChatGPT’s potential as a valuable educational tool is not without merit, the reality presents a more complex picture. Grobe suggests that ChatGPT can complement and enhance traditional methods of instruction rather than replace them. However, striking the right balance between leveraging AI tools and maintaining rigorous academic standards remains a formidable challenge during this transitional period for both instructors and students.
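To make the proxy measurement above concrete, the sketch below computes missing-or-late rates for a pre-redesign and a post-redesign class. The counts are hypothetical placeholders, not the actual course data, which this study reports only as qualitative trends.

```python
# Hypothetical sketch of the engagement proxy described above.
# The counts are invented placeholders; the study reports only
# qualitative trends, not these numbers.
pre_redesign = {"assignments_due": 180, "missing_or_late": 40}
post_redesign = {"assignments_due": 180, "missing_or_late": 15}

def missing_late_rate(cls: dict) -> float:
    """Share of expected assignments that were missing or late."""
    return cls["missing_or_late"] / cls["assignments_due"]

print(f"pre-redesign rate:  {missing_late_rate(pre_redesign):.1%}")
print(f"post-redesign rate: {missing_late_rate(post_redesign):.1%}")
# The same comparison can be run on grade distributions, e.g. the
# share of A/B grades before vs. after the redesign.
```

The design point is that the proxy is cheap to track per assignment, so an instructor can watch engagement shift during the semester rather than waiting for final grades.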
Allen and Kizilcec [36] offer a distinct perspective on the challenges educational systems face in managing AI tools like ChatGPT. They argue that these issues stem from a lack of theoretical grounding in cheating research, which has traditionally emphasized prevention and detection tools. This approach is further hampered by faculty reluctance to adopt such measures. Earlier studies employing motivational theories, such as Expectancy Theory, Goal Setting, and Reinforcement Theory, have provided trends and highlighted issues warranting further investigation but have failed to establish a robust framework for understanding faculty and student behavior in relation to academic misconduct [37]. Allen and Kizilcec’s framework for academic conduct is grounded in the concept of “degrees of cheating,” which forms the basis of a “spectrum of academic conduct” defined by varying levels of trust. This spectrum captures the range of student behaviors from cooperation (not cheating) to competition (cheating). In “mastery-oriented cooperation,” students focus on developing competence by understanding and applying course content. In “performance-oriented cooperation,” students aim for good grades with minimal effort. “Fudged cooperation” refers to students who do not actively seek cheating opportunities but may still succumb to temptation when the chance arises. “Defensive competition” involves students using dishonest tactics, such as plagiarism or purchasing reports and exam solutions, to secure grades. Finally, “aggressive competition” describes students who engage in sabotage, undermining their peers to gain a competitive edge.
The spectrum of academic conduct, while an interesting categorization of student behaviors along the lines of cooperation (not cheating) and competition (cheating), does not offer a robust operational framework for understanding classroom dynamics. Similar categorization models have been proposed in the past, but they often lack a grounding structure for practical application. For instance, Cressey [38] introduced the “fraud triangle,” a three-word framework comprising pressure, opportunity, and rationalization. Building on this, Wolfe and Hermanson [39] added the term “incentive” to create the “diamond model of fraud.” Later, Kassem and Higson [40] argued for a more comprehensive “new fraud triangle model,” incorporating motivation, opportunity, integrity, and the fraudster’s capabilities. In their 2024 work, Allen and Kizilcec adapt Cressey’s fraud triangle by anchoring it to the concept of “trust,” which they define as the vulnerabilities shared by students and teachers engaged in the dynamic interplay of cooperation (not cheating) and competition (cheating). This adaptation forms their five-domain spectrum of trust-dependent teaching. Despite the unique terminologies and structures proposed in these models, they remain fundamentally “word suggestions” that reflect the subjective choices of their creators rather than offering a universally applicable grounding. As such, these frameworks are shaped more by the perspective of the model’s author than by any objective or fixed standard.
From this perspective of “word management,” McMurtrie [28] asserts that ChatGPT “promises to revolutionize how we write,” invoking grand images of transformation. However, revolutions inherently involve both creation and destruction. A historical parallel can be drawn to the French Revolution (1789-1799), which simultaneously dismantled the monarchy through violent upheavals like the Reign of Terror and laid the groundwork for enduring reforms such as the Napoleonic Code. Similarly, the introduction of ChatGPT into education embodies this duality. While it has the potential to erode academic integrity by enabling plagiarism, it also presents opportunities to enhance critical thinking and elevate the quality of student writing. Like all revolutions, its ultimate impact will depend on how educators and students navigate its disruptive and creative forces. Teaching Engineering Organizational Development (EOD) across twenty years has revealed that engineering students often struggle with case study analysis using models of organizational behavior. These models, which encompass various dimensions of management and organizational behavior (including leadership, power, communication, groups and teams, conflict and negotiation, emotions and moods, organizational culture, decision-making, motivation, perception, and justice), do not come naturally to engineering minds. To address this, the initial weeks of the EOD course were designed to include “practice” case studies. These assignments, though thoroughly reviewed and graded, did not count towards the final grade. This method allowed students to understand the instructor’s expectations, identify key elements in a case, apply relevant models, and learn from their mistakes without the pressure of grade penalties. However, the fall 2023 EOD class presented a new challenge. In a class of 20 international graduate engineering students, the majority turned to ChatGPT to complete their assignments, bypassing the crucial learning process. The instructor labeled this as cheating, emphasizing that while the repercussions might be mild at the practice stage, they could escalate to severe academic and disciplinary consequences in subsequent case studies. To underscore the value of genuine effort, the instructor showcased the grade distribution of another EOD class where students earnestly engaged with the models and analysis methods, thereby highlighting the better grades that reflected the benefits of hard work and dedication.
After class, a student, who later emerged as the leader of a group of ChatGPT users, approached the instructor. He claimed that international students in the other class were also using ChatGPT but in a more sophisticated and undetectable manner. The instructor, taken aback by the student’s candid admission and intrigued by his honesty, saw it as a learning moment. The student expressed interest in talking to a few high-performing students from the other class to understand their advanced ChatGPT usage. The instructor does not know if any such meeting occurred, but regardless, there was no change in the behavior or performance of the group of ChatGPT users. This experience prompted the EOD instructor to reflect on a personal shortcoming: the failure to learn from the students. The student’s interest in mastering ChatGPT for assignment completion highlighted a gap in teaching the proper use of AI tools. At that moment, the instructor was more focused on discouraging ChatGPT than on teaching how to use it effectively as a supplement to independent work. This situation revealed a dysfunctional classroom dynamic, where neither the instructor nor the students were adequately prepared to address the challenges and opportunities presented by ChatGPT.
For many students, ChatGPT presented an easy alternative to active participation. Unlike the rigorous process of learning and applying models, ChatGPT offered quick solutions with minimal effort. This behavior finds ironic support in some scholarly research. Berdanier and Alley [29], in their exploration of teaching engineers to write in the ChatGPT era, argue that teaching writing is akin to teaching thinking, regardless of AI tools. They believe that equipping engineering students with strong writing skills enhances their communication within and outside their discipline. However, this logic is challenged by the potential for engineers to become mere “prompt writers,” reliant on AI for completing assignments and professional tasks. This raises critical questions about the efficacy of this approach compared to traditional methods of teaching writing and critical thinking. Johri et al. [41] argue that AI tools are akin to other digital tools integral to engineering education. They advocate for the development of “prompt engineers,” citing White et al.’s [42] catalogue of prompts, which suggests that better prompts lead to more precise and effective AI-driven outcomes. In the extreme scenario, engineers might only instruct AI to perform complex tasks, such as designing an airplane, potentially rendering traditional engineering skills obsolete. However, Johri et al. acknowledge the necessity of a “sensemaking component” to verify the validity and completeness of AI-generated tasks, emphasizing human oversight. This implies that prompt engineers, despite their limited knowledge beyond entering prompts into AI tools, should possess the ability to critically assess AI suggestions.
Johri et al. clearly recognize that generating a list of tasks via AI is not equivalent to validating and ensuring the completeness of those tasks. They stress the need for the “sensemaking component,” traditionally the domain of the knowledgeable engineer. However, they foresee that sensemaking could eventually be performed by AI tools themselves through user feedback and iterative improvement. During this transitional period, the behavior observed in the EOD classes highlights a significant issue: many students lack the knowledge and ability to verify the validity and completeness of ChatGPT’s output in engineering assignments. In the Graduate Academic Research (GAR) course, the focus shifted to addressing ChatGPT plagiarism and redesigning the course to minimize it. The redesigned course aimed to reduce plagiarism through a small group approach in a lab-classroom setting, dynamically adjusting to student capabilities. Unlike the EOD course, which prohibited AI use, the GAR course treated AI as a beneficial tool for specific tasks like generating citations, which were previously managed by tools like Citation Machine, an online citation generator cluttered with distracting advertisements. ChatGPT provided faster and more accurate results without the distractions.
In practice, GAR’s ChatGPT utilization inadvertently led to the creation of “prompt engineers,” even as it focused on harnessing students’ critical thinking skills by teaching them to ask the right questions. For instance, students learned to ask specific questions to get ChatGPT to vet articles and opposing viewpoints. This leads to a broader inquiry: What is critical thinking? Can it include the process of asking the right questions and using feedback to refine and re-ask them? The course required students to use ChatGPT to find evidence for their viewpoints in various articles, which proved to be an effective practice for enhancing English vocabulary skills, given the subtleties of English synonyms in the context of nuanced arguments. The use of tools like ChatGPT raises questions about the fine line between learning and “intelligent copying.” For example, international students found ChatGPT useful for quickly translating text, helping them assess the usefulness of research articles in their native language. This prompts a debate on whether students should immerse themselves in the American academic environment in English or rely on AI tools for translations, potentially compromising their immersion in the English academic context. Using ChatGPT to summarize articles, simplify the summaries, or translate them into students’ native languages could help non-native students replicate the academic research process without the added burden of navigating English texts, possibly reducing frustration and plagiarism. However, whether a ChatGPT-generated summary holds the same learning value as one created by a student after carefully reading an article remains debatable. The aim of a course should be to teach students to seek knowledge independently rather than rely on AI-generated summaries. However, one could argue that in the process of seeking knowledge, they are just using ChatGPT to refine the answers to their questions.
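For readers unfamiliar with this kind of structured prompting, the sketch below shows one way a student might script an article-vetting question of the sort described above. It is an illustration, not the course’s actual material: the model name, prompt wording, and abstract placeholder are all hypothetical, and it assumes the openai Python package (version 1 or later) with an API key in the environment.

```python
# Hypothetical sketch of a structured "vetting" prompt, in the spirit
# of the GAR exercises described above. Assumes the openai package
# (>= 1.0) and OPENAI_API_KEY set in the environment; the model name
# and prompt wording are illustrative, not the course's materials.
from openai import OpenAI

client = OpenAI()

abstract = "..."  # placeholder: paste the abstract of the article being vetted

prompt = (
    "You are helping a graduate student vet a research article.\n"
    "1. Summarize the abstract's main claim in one sentence.\n"
    "2. List one piece of evidence the abstract offers for that claim.\n"
    "3. Name one plausible opposing viewpoint a reviewer might raise.\n\n"
    f"Abstract: {abstract}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model would do
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The design point is that the prompt asks for claim, evidence, and counter-viewpoint separately, so the student still has to judge the answers rather than receive a finished essay.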
This perspective contrasts with Johri et al. [41], who regard AI tools as fundamentally similar to other digital tools in engineering education and advocate for teaching students how to craft better prompts to generate improved summaries. However, findings from our classroom-driven research challenge this view. Even after a full course focused on prompt-writing and the integration of ChatGPT into research and analysis, students consistently relied on ChatGPT to complete larger writing assignments, even after organizing their research notes into a coherent argument outline. This highlights a deeper issue: How can instructors persuade students of the intrinsic value of developing research and writing skills? This challenge reflects the broader tension in higher education: whether students perceive learning as a transformational engagement with knowledge or merely as an economic transaction, where the value of education is measured in terms of its monetary return [3,31,32]. Given that ChatGPT presents both challenges and opportunities in education, our classroom observations emphasize the need to teach students to use it effectively as a tool for enhancing their learning and critical thinking skills, rather than as a shortcut to avoid genuine effort. The task for educators now is not only to balance these new technologies with the enduring principles of thorough, independent learning, but also to convince students of the value of learning itself.
The rapid integration of AI tools like ChatGPT into academia is transforming education, revealing both opportunities and challenges in creating AI-enhanced learning experiences. This study explores the complex interplay between AI tools and students’ academic development, particularly among vulnerable international students in engineering fields, where reliance on ChatGPT often bypasses the development of critical thinking, writing, and problem-solving skills. Despite efforts to redesign courses to reduce AI dependency and encourage active engagement, student behavior demonstrates that these tools can both support and undermine academic growth, depending on how they are employed. A parallel can be drawn with academic ghost-writing services, which legitimize their practices through strategies such as denying responsibility, denying harm, blaming the victim, condemning critics, and appealing to higher loyalties [43]. Similarly, AI tools like ChatGPT may inadvertently expand access to undetected plagiarism. The prevalence and success of ghost-writing services can function as a measuring rod that reflects underlying teacher failures in classroom and assessment strategies, underscoring the need for deeper pedagogical reflection.
High-stakes projects and examinations, prevalent in current academic structures, offer another lens for examining student misconduct. French et al. [20] argue that these high-pressure assessments often lack a strong evidence base for their academic benefits. Instead, their continued use is rooted more in tradition and organizational inertia than in scientific or pedagogical merit. This misalignment may further exacerbate academic dishonesty and calls for a reassessment of assessment methodologies. As education enters this transitional era, institutions must balance leveraging AI’s potential with safeguarding the core values of independent learning and critical inquiry. Policies should evolve to ensure AI tools complement rather than replace essential competencies. However, skepticism exists about whether faculty will consistently enforce even well-designed policies [18, 32]. By addressing the diverse needs of student populations and tailoring educational strategies to align with these challenges, institutions can harness the dual-edged nature of AI. The ultimate goal should be to integrate AI tools seamlessly into education, fostering not only academic achievement but also lifelong intellectual growth and adaptability in an increasingly digital world.
No conflict of interest.