Perfect Scores, Imperfect Learning
AI, Meritocracy, and Cheating Through the Lens of Three Educational Systems
“The Cubies’ ABC,” Earl Harvey Lyall, 1913. Source: Public Domain Image Archive/The Getty.
I. Don’t Panic About AI – Panic About the State of Our Society.
The past few weeks have seen significant discussion of how AI will affect the humanities, education, and critical thinking. Most notably, NYMag’s Intelligencer recently featured an article that all but confirmed everyone’s worst fears about AI in education. Both online and offline, the consensus seems to be that AI is making people stupid, uncritical, and lazy, eroding our sense of efficacy.
But why are these students cheating in the first place? Would they be learning if they completed assignments as designed? Were they really learning before AI came along?
I went to college over a decade ago, but many insights in these articles echo trends I observed as a student and later as an educator.
Because of this, I want to complicate the narrative. Others have likely made similar arguments, but I haven’t seen them break into mainstream discourse. I have a few essays in mind on this theme; for this first one, I want to focus on the U.S. higher education system, the institutionalized routes to success, and other models that can teach people how to write and think critically.
AI will indeed harm education and critical thinking, but primarily due to existing structural problems, not because AI is inherently educational poison. In the same way that social media amplifies existing social problems rather than creating entirely new ones, AI amplifies educational dysfunction that was already there. A system that rewards surface-level compliance over deep understanding creates the perfect environment for AI shortcuts to flourish.
First, a bit of background to give a sense of where I’m coming from. I grew up abroad, attending French international schools (not the American education system) until college, when I moved to the U.S. for a tiny liberal arts college most people haven’t heard of but that I adored. I then taught in China for a few years, first as an educational coach helping Chinese students apply to universities abroad, and then as a current affairs lecturer at the university level.
These structural issues aren’t inevitable. We can create educational environments that foster genuine learning, even in an AI world. But this requires acknowledging that the problem isn’t (just) AI; it’s how we’ve designed education to incentivize performance over understanding. The solution isn’t better AI detectors but better learning design.
To understand why AI feels so threatening, we need to examine the broader dysfunction in our educational systems.
Crop from “School and family charts, accompanied by a manual of object lessons and elementary instruction. No. XIV. The Chromatic scale of colors,” Marcius Willson and N.A. Calkins, 1890. Source: Library of Congress.
II. The Broken Faux-Meritocracy of Grades
1. The French Education System: When Perfection is Impossible
The French grading system differs significantly from those in China and the U.S. Everything is graded out of 20, and 10/20 is average. The best student in a class might have a 14/20. It was rare to get anything above 18/20 because there was always something to improve. 19/20, the joke went, was for the teacher. And 20/20? For God.
This mentality wasn’t just reflected in grades, but also in feedback. At the end of each trimester, you would get a grade bulletin with comments. I was one of the better students (always a nerd, no regrets!), and I remember getting the best grade average in my class for history: 17/20. My feedback? “Great. Can do better. Doesn’t speak up in class.”
For better or worse, the French system inoculated me against the need for perfect grades because perfection was never attainable, and it gave me a growth mindset because I knew I always had room to improve.
This did not translate to the American school system, and when I started college, I struggled because I didn’t understand grades. I remember taking Multivariable Calculus—a class math majors typically took pass/fail because it was rare to get anything above a C—and receiving a C-. I quickly learned that many of my peers would have had a heart attack if that grade had appeared on their transcript.
At first I didn’t understand why. Wasn’t C the middle grade (A-B-C-D-F), therefore the average grade, equivalent to a 10/20? So most people’s GPAs should be around a C, right?
Wrong. I did not understand the school system at all, and according to many peers, I had blown my chances of ever getting into law school, a PhD program, or a prestigious career like “consulting” because of this awful grade on my transcript.
This distressed me. I didn’t know what I wanted to do; I had no idea what consulting was or whether I would like it, but I wanted to keep my options open. So I tried much harder to get As on everything else to salvage my embarrassing GPA.
This foundation in process-oriented learning would prove crucial when I encountered America’s very different educational culture.
2. American Culture Shock: Where Average Becomes Failure
To remedy what suddenly seemed like a grade that would doom my future, I asked friends who did well in their classes how they went about it, and some of their answers were illuminating. Most notably, I remember a friend saying of essay writing, “I don’t argue what I believe, I argue what is most effective to argue to get a good grade.”
This sent me into a tailspin. I chose to go to the U.S. for college and graduate school because, after years in a very rigid school system, I wanted to be able to choose my classes, learn what I wanted, and develop my individuality. This was both a true and a naive understanding of the U.S. higher education system. My friend’s way of getting good grades shattered the very reason I had chosen to go to college in the U.S. in the first place.
The reasons for pursuing fields with high GPA requirements were equally problematic. I remember one friend, a very academic, social justice-minded, and kind person whom I adore, telling me he wanted to go into consulting. I found it odd; based on his interests, I thought he’d become an activist or work for an NGO. He told me, “Consultants just do it for a couple of years, and then they do whatever they want. It guarantees success later in life and provides prestige. I will then start an NGO or a social enterprise.” That friend is still in consulting to this day, over a decade later, and continues to have mixed feelings about what he does with his life. This isn’t to say this is the case for all consultants or for everyone following these paths with strict entryways, but perhaps some people would contribute differently to society if they had different incentives.
3. Chinese Extremes: One Test, One Future
When I taught in China, grade tunnel vision was ubiquitous among students, not because they were shallow, but because that was how the system was designed. The gaokao (national college entrance exam) determines your entire life trajectory with a single score. Students and families will do anything to gain an edge because the stakes are existential. One exam determines which university you attend, which determines your major, which largely determines your career prospects and social status for life. In this context, cheating isn’t a moral failing; it’s a rational response to an irrational system.
The parallels between the Chinese gaokao pressure and American grade obsession are striking. Both cultures have created education systems that value performance metrics over actual learning. In China, one test decides everything. In America, it’s a series of high-stakes assessments throughout college that determine access to prestigious graduate schools and careers. Same dysfunction, different mechanisms. Students learn to game the system because the system has been designed as a game rather than a learning experience.
4. The Universal Problem: When Metrics Replace Learning
Despite the various flavors of meritocracy instilled across these countries, research confirms the misalignment between grades and professional careers. Hansen, Hvidman, and Sievertsen (2023), studying a nationwide grading reform in Denmark, found that while higher GPAs led to higher initial earnings after graduation, this effect diminished rapidly, becoming undetectable approximately three years into graduates’ careers. This suggests grades are imperfect predictors of long-term career success.
De-emphasizing grades across society is easier said than done; we’d need employers, graduate schools, and professions to fundamentally change their screening criteria. But without addressing this root cause, any solution to cheating, whether via AI or not, is just treating symptoms. We can build all the AI detectors we want, but if students still believe their futures depend on perfect GPAs, they’ll find ways around any technological solution.
III. Is AI Leading to More Cheating?
If the incentives are aligned so that good grades become the minimum filter for success, then people will do what they need to do to succeed. And for a lot of people, that means not just sacrificing their own interests or views for the sake of good grades, but cheating outright.
Cheating has always been a problem, and because of the issues discussed in the prior section, it was already increasing before LLMs became commercially available. This was another disheartening fact I learned post-college. One classmate, I heard, had their parents, prominent academics, write all their papers for them.
Other students paid ghostwriters, and more of them than I realized. I found this out because friends who graduated a few years before me, right around the financial crisis, were un(der)-employed and looking for gigs, and were recruited by students from a variety of campuses to write essays for them. A frat at a large, prestigious university basically had them on retainer.
This underground economy was more extensive than commonly acknowledged. A 2018 study found that while the historic average for self-reported commercial contract cheating was 3.52%, samples from 2014 onwards showed this figure had alarmingly risen to 15.7%, indicating a substantial and growing problem with students paying third parties for assignments well before ChatGPT’s launch.
Current data shows these problems have persisted post-AI. Studies indicate overall cheating rates may have remained stable following ChatGPT’s release, with rates staying around 60-65% among high school students in both pre- and post-ChatGPT periods.
So why does AI feel so much more threatening if it isn’t increasing overall cheating rates? The answer lies in what AI changes about the nature of cheating itself.
First, AI democratizes cheating in unprecedented ways. Previously, sophisticated academic dishonesty required resources—money for ghostwriters, connections to smart friends, or privileged access to completed assignments. AI levels this playing field. Now, any student with internet access has instant access to a writing assistant far more sophisticated than most human tutors.
Second, AI transforms cheating from a discrete transaction to an integrated process. When students paid ghostwriters, there was a clear boundary between legitimate and illegitimate work. With AI, that boundary blurs. Students can use AI for “research,” “brainstorming,” “editing,” or “feedback,” making it harder to determine where assistance becomes cheating.
Third, and most unsettling for educators, AI exposes how much of our assessment system was already vulnerable. The fact that AI can so easily complete many assignments reveals that these tasks weren’t testing genuine understanding—they were testing the ability to produce certain formats of text. This realization is what makes AI feel existentially threatening to educators: it’s not just changing the rules of cheating, it’s revealing that our rules were already broken.
This suggests AI may not be creating new cheaters but enabling existing ones to operate more effectively and with fewer consequences. More concerning, it’s making visible the fundamental disconnect between what we say we’re testing (critical thinking, understanding, originality) and what we’re actually testing (the ability to produce academic-sounding text in response to prompts).
IV. Learning Among Distractions
There is already an attention problem in higher education, and it is worse than when I was in college. Between COVID taking away formative learning years in ways we still may not understand and the ubiquity and addictiveness of phones, the environment is stacked against today’s students.
The COVID pandemic exacerbated existing attentional challenges. Studies show a “pandemic-induced strain of student disengagement,” with students reporting less time studying and interacting with faculty, increased disconnection, elevated anxiety, drops in class attendance, and diminished help-seeking behaviors.
The attention crisis makes AI particularly appealing - it promises effortless solutions in a world where sustained attention feels impossible. But this crisis didn’t emerge in a vacuum. We’ve created educational environments that fragment attention with constant assessment, that reward speed over depth, and that prioritize performance over understanding. Is it any wonder that students reach for tools that promise to solve these problems instantly?
As a solidly non-cusp millennial, I relate to Gen Z in a few ways that are less common across my cohort. One of them is having grown up surrounded by distractions. Not because I had a smartphone in school; I didn’t get my first one until after college, and I barely even brought my computer to class because it was clunky. Instead, it was because I had ADHD that went undiagnosed until my 30s. Because of this, certain ways of learning worked for me, and others did not.
My experience with ADHD gave me insight into how traditional educational structures fail students with different learning needs—problems that technology amplifies rather than solves. In my case, the distractions mainly came from my own neurology. Today, students are in an “ADHD environment,” unable to extricate themselves from a constant barrage of information. The sources are different, but the result is similar: people who are less capable of concentrating.
In a society where attention is increasingly fragmented, we need educational structures that force sustained engagement and build the cognitive muscles necessary for critical thinking, writing, and problem-solving. Part of that involves getting rid of things that don’t work. My number one enemy: take-home exams.
V. Get Rid of (or Significantly Change) Take-Home Exams
When teaching at the university level in China, I was warned against giving students take-home exams because “cheating was rampant.” The warning was backed by research: a 2019 systematic review found that take-home exams carry a significant risk of unethical student behavior for lower-level cognitive tasks, particularly those focusing on recall rather than higher-order thinking.
In my experience as an undiagnosed young adult with ADHD, take-home exams were useless, not so much because I cheated, but because I would procrastinate and do a shitty job in the end. I needed structured time limits to get any draft done, and unless I had several deadlines along the way to improve it, all I could turn in was a first draft, not a finished project. I was always in awe of friends who could finish our take-home work weeks in advance with a polished final product.
Because of this, college, where almost all writing took the form of a take-home essay, didn’t teach me how to write. I also cannot remember a single take-home project I ever did, including essays. I cannot remember what I argued or how I reached my conclusions. This is not true for other types of projects.
I want to be clear: I’m not advocating for the elimination of take-home assignments entirely. There is certainly a role for them in educational practice. However, these other types of learning should be prioritized because they build the intellectual muscle, resilience, and confidence students need to tackle take-home exams without the temptation to cheat, or with that temptation significantly diminished. When students develop genuine competence through engaged, process-oriented learning, they approach independent work from a place of strength rather than desperation.
VI. Proven, AI-Proof Educational Approaches
Right now, we’re asking the wrong question. Instead of “How do we stop students from using AI to cheat?” we should ask, “Why are our assignments so easily replaceable by algorithms?”
Based on my experiences across multiple educational systems as both student and teacher, here are the approaches I learned most from and that proved effective with my students:
1. Building Focus and Presence
A. In-School Intensive Exercises
In the French school system, you spend a lot of time at school, especially from middle school (which begins in 6th grade) onward. I was in class most days from 8 am to 6 pm, and then I had homework. Why was I in school so long? Because classes usually came in blocks of two hours (sometimes more): part of the time you attended an interactive lecture, and the rest you spent on in-class exercises.
The French school system culminates in the French baccalaureate, which inspired the International Baccalaureate that many people are more familiar with. It involves in-person writing exercises in blocks of four hours, almost exclusively short and long-form handwritten essays in which you had to cite works from memory.
These intensive, in-person exercises force students to internalize knowledge and develop fluency without external aids. Students can’t rely on Google, Wikipedia, or AI when they’re writing a four-hour handwritten essay from memory. This builds genuine competence that can’t be faked. The French approach recognizes that real learning requires struggle: students must hold complex ideas in their heads and synthesize them under pressure.
The cognitive science supports this approach. The Levels of Processing model shows that deeper, semantic processing (engaging with meaning) leads to more elaborate, stronger, and more durable memory traces than shallow processing. Sustained attention is crucial for learning and memory formation.
B. Small Seminars with No Hiding
The importance of class size seems obvious, but it bears emphasis. Research has shown that larger classes are associated both with more limited class participation and with lower grades.
In my college’s small seminar format, students couldn’t pretend to have done the reading. With only 8-12 students around a table, everyone had to contribute. The social pressure and immediate questioning forced preparation and engagement in ways that large lectures never could. You couldn’t hide in the back or check out mentally; every student was accountable for engaging with the material and their peers. Participation also counted toward your grade.
For courses that required large lectures, breakout sessions with a professor and a smaller group of 8-12 students still provided a more tailored experience.
2. Emphasizing Process Over Product
A. Credit the Journey, Not Just the Outcome
One of the clearest examples of how process-focused education differs from outcome-focused education came from my experience with mathematics. Despite math being my strongest subject through high school, as I mentioned earlier, I struggled significantly in my first calculus class at an American university—so much so that I ultimately abandoned my plan to minor in mathematics. The difference wasn’t just that the class was hard; the teaching approach felt fundamentally different. In the French education system, mathematics heavily emphasized proofs—you had to write out your complete thinking, showing all your logical steps so the professor could understand exactly how you arrived at your solution. In contrast, my American math experience placed much more weight on correct answers, even when partial credit existed for process.
This shift hit me particularly hard as someone with undiagnosed ADHD. In French math classes, when I made arithmetic errors despite understanding the underlying concepts, I would lose some points but not everything. Teachers recognized that I grasped the big picture; I just struggled with execution details. But in American college mathematics, those same mistakes were much more heavily weighted. The outcome overshadowed the understanding. This experience taught me that when we focus on how students think through problems rather than just whether they get the right answer, we create space for different types of minds to flourish—a principle that extends far beyond mathematics into all learning.
B. Workshop-Based Learning
I mentioned earlier that I didn’t learn how to write in college. Where did I learn then? At an internship.
The summer before my senior year, I interned at a now-defunct DC think tank whose model was to run entirely on unpaid interns. We spent most of our time writing about international affairs, editing each other’s work, and then publishing it. Suddenly, after struggling with writing for the previous three years, I felt like I finally got the hang of it, thanks to iteration, integrating feedback, and editing and critiquing my peers’ work.
When I came back to college and reconnected with one of my mentor-professors, he was stunned by how much my writing had improved. He even wondered how a similar learning mechanism could be incorporated into the college, since this experience had clearly taught me things that college classes and resources had not.
Workshopping creates active learning that’s inherently AI-resistant because it requires real-time engagement and response to feedback. Students can’t just submit AI-generated work and hope for the best; they must defend and revise it through group discussion. They also have to read their classmates’ work and provide constructive critiques, which in turn teaches them what works and what doesn’t.
The iterative nature of workshops, where writing is refined through multiple rounds of feedback and revision, mirrors how real professional writing actually works. This think tank experience taught me more about writing in one summer than three years of traditional college assignments.
C. Oral Assessments
Oral defenses of written work or oral exams force students to truly understand their material. You can’t use AI when you’re being questioned in real-time about your arguments and their implications.
My alma mater has an optional “honors” path inspired by Oxford tutorials. We would specialize in four subjects at the graduate level, culminating in a sit-down written exam followed by an oral exam defending our written answers before outside experts. This format tested not just our ability to write, but our deep understanding of the subject matter and our ability to think on our feet.
A 2024 systematic review found that oral assessments can effectively evaluate deep understanding, critical thinking, and communication skills, while also potentially reducing academic misconduct, though the authors also noted concerns about student anxiety and the need for clear rubrics.
3. Making Learning Personal and Interactive
A. Assignments Connected to Personal Experience
Assignments that require students to connect academic content to their own lives and experiences are much harder to outsource to AI because they require genuine personal reflection and investment. When students must draw on their own experiences, cultural backgrounds, or personal stakes in the subject matter, the work becomes inherently more authentic and harder to fake. Also, people love to talk about themselves, and these assignments leverage the well-documented tendency to engage more deeply with material that connects to one’s own life.
I integrated this one semester by making my students’ final project a mini-TED talk that had to draw on something they had learned from their own experience. Students talked about the benefits of cooking for themselves and their dedication to their favorite TV show (Friends was extremely popular in China), and some discussed very intimate personal experiences, including dealing with domestic violence or coming to terms with their own sexuality in college. It’s harder to make these things up, and when you have to present to your peers, your passion for the subject shows and sticks.
B. Role-Playing, Simulations, and Debates
The real-time nature of role-playing, simulations, and debates makes them AI-proof: you can’t pause to consult ChatGPT in the middle of a heated debate. They’re also ridiculously fun and memorable.
During my tenure in China, I had students role-play different stakeholders in U.S. electoral politics to understand the 2015-2016 primary season. Each student had to research and embody a specific candidate or interest group over the course of the semester, then debate policy issues from that perspective. This required a deep understanding of both their character’s positions and the underlying policy issues. After each debate, we debriefed: what worked, what didn’t, what they learned. My students loved this and ended up with more nuanced understandings of U.S. politics, so much so that we were able to get into nitty-gritty debates about what was happening in the U.S. of a kind I had not had in China before.
This method worked for me as well in graduate school, where I took several negotiation courses taught entirely through simulations. Each week we would be assigned a role we had to prepare for, then negotiate according to that role’s interests and information. Afterwards, we would debrief and connect what we learned to the readings (which, even though I always tried to do them, I didn’t always get to, and sometimes forgot what they were about even after reading them, because of undiagnosed ADHD). I may not remember a single think piece I wrote for a college seminar, but I will always remember the lessons learned from failing to properly advocate for my made-up small country by underestimating how much money the made-up big country had and would be willing to give me.
That these methods work isn’t just my hunch, either. Research confirms the effectiveness of these interactive approaches: a study on the use of role-playing in education found a significant positive overall effect on student learning, with particularly strong impacts on skill development and student satisfaction.
VII. Why Aren’t More People Doing This?
These methods are by no means new, and that’s a good thing, because it means we don’t need to completely reinvent the wheel to adapt to AI’s impact on higher education. They do, however, share a common thread: they require sustained attention, human interaction, and real-time thinking.
Why don’t more professors adopt these approaches? Because they require more time, smaller class sizes, and more intensive faculty involvement. It’s easier to assign generic essays that can be graded with standardized rubrics. The preference for take-home assignments isn’t pedagogical - it’s practical, driven by budget constraints and faculty overwork.
Economic constraints play a significant role. Faculty workloads typically allocate 40% to teaching, 40% to research, and 20% to service. With research prioritized for career advancement, little time or resources remain for labor-intensive active learning strategies.
This connects back to the central argument: The solution to AI cheating isn’t better detection software or stricter policies. It’s addressing the fundamental problems in higher education that make both students and professors choose convenience over learning. Universities have structured themselves in ways that incentivize the very behaviors we now panic about when AI makes them easier.
VIII. Conclusion
The AI panic is a symptom, not the disease. We’ve built educational systems that prioritize perfect products over learning processes, credentials over competence, and efficiency over engagement. When students can complete assignments with AI, it exposes that these assignments weren’t measuring genuine understanding in the first place.
But this crisis also presents an opportunity. We already know how to create educational experiences that are both AI-resistant and more effective at developing real skills. My journey through French, Chinese, and American educational systems revealed that the most transformative learning happens when students must be present, engaged, and thinking in real-time.
The solution isn’t better detection technology—it’s better educational design. When we prioritize sustained attention over fragmented multitasking, process over product, and human interaction over algorithmic efficiency, we create learning environments where students don’t want to cheat because the experience itself is valuable.
This isn’t just about education; it’s about democracy. In a world increasingly dominated by algorithmic thinking, we need citizens who can think critically, engage deeply, and grapple with complexity. These are fundamentally human capacities that cannot be outsourced to AI.
The question isn’t whether we can build more sophisticated AI detectors. The question is whether we can build educational experiences so engaging that students choose learning over shortcuts. The future of education—and our democracy—depends on getting this answer right.