A recent AI cheating episode in one of Belgium’s central exams has reignited a debate that's been simmering in education for years: how do we maintain exam integrity in the digital age?
In early July 2025, three students were caught using ChatGPT during Flanders' medical entrance exam. It was the first time such cheating had been detected in these high-stakes tests. What started as an isolated incident quickly escalated when the exam's unusually high pass rate (47%, double the previous year's) raised suspicions of widespread fraud.
Because everyone who took the exam knew the right integrity barriers were not in place, students who narrowly missed admission began to question whether others had cheated their way to success. The controversy led to legal action from at least 19 candidates, an investigation into close to a hundred suspected cases, and ultimately the resignation of the examination board's chair.1
The incident exposed a fundamental flaw: the digital exam system allowed students to access AI tools during the test, while supervisors watching from the back of the room remained powerless to detect it.
For institutions that dismissed lockdown browsers as unnecessary, the incident was a wake-up call. But the reaction of a large part of the sector – shifting back toward pen-and-paper exams to avoid AI risks – is equally misguided. We don't need to choose between unsecured digital exams and abandoning digital assessment entirely.
There's a third way. And it starts with understanding the fundamental principles that make digital exams genuinely secure.
What went wrong?
The Belgian incident wasn't just about students being clever or AI being powerful. It was a sign of gaps in the current setup that turned a high-stakes medical entrance exam into a case study of what not to do.
The exam was conducted digitally across 70 locations, with over 5,000 students taking it simultaneously on computers. But the security architecture had critical flaws: nothing stopped students from accessing the internet and reaching ChatGPT during the test. Anyone who tried could get AI assistance, and many apparently did.
There was no file isolation, no application control, and no system-level enforcement. Just supervision from the back of the room and the hope that students would stay honest – a method long assumed to be an effective deterrent against cheating.
The examination board's chair admitted they couldn't check everyone's search history at the scale of the exam. Investigators found that very little usable data remained on the computers in the exam labs, and what they did recover was highly fragmented. The architecture made cheating easy to commit and nearly impossible to prove afterwards.
Layer supervision on top of these vulnerabilities, and you're essentially hoping students choose not to cheat. Hope is not a strategy.
The false dichotomy: supervision vs. paper
For years, we've heard the same refrain from educational institutions: "We have supervisors watching. That's enough." What happened in Flanders’ medical exam proved what many of us already knew: it's not enough, not even close.
When AI tools can generate answers in seconds, when students can access LLMs through their browser, and when communication apps run silently in the background, a supervisor watching screens from the back of the room is essentially powerless.
But now the pendulum has swung to the opposite extreme. "Let's just go back to pen and paper!" some institutions are saying. This reaction is equally problematic. It throws away the richer assessment possibilities that digital tools enable, the efficiency gains in grading and exam administration, and the scalability that modern education demands.
The truth is more nuanced: you can have secure and scalable digital exams if you build on the right principles.
The 5 pillars of AI-proof assessments
Let's break down what actually makes a digital exam secure. These aren't nice-to-haves; they're the foundation that prevents the kind of cheating we saw in Flanders’ medical entrance exam.
1. Exam Browser vs. Normal Browser
The truth is: normal browsers are not designed for exams. They're built for freedom, for multitasking, for productivity. During an exam, those features become vulnerabilities.
Think about it. A standard browser, even with restrictions like full-screen mode or screen recording extensions, still allows students to open locally installed applications, run background processes, access remote desktop software, and use browser extensions with built-in AI assistants.
And here's where it gets particularly insidious: AI-enhanced browsers like ChatGPT's new Atlas browser have integrated AI assistants that don't require visiting another website or installing an extension. Students can access these tools without leaving their exam environment and without triggering any obvious red flags.
Purpose-built exam browsers make the difference. These specialised tools enforce restrictions at the system level, not just the browser level. They can:
- Block other applications from launching
- Disable keyboard shortcuts that might open other programs
- Detect when students are running the exam in a virtual machine
- Control internet access with surgical precision
- Monitor for suspicious activity patterns
This is about creating an exam environment that faithfully reflects the controlled conditions of a physical testing room.
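To make this concrete, here's a minimal sketch of one such check: a virtual-machine heuristic that inspects hardware vendor strings at startup. The marker list and lookup paths are illustrative assumptions; real exam browsers combine much deeper system-level signals than this.

```python
import platform
import subprocess

# Illustrative (not exhaustive) vendor strings that often appear
# in virtualised hardware identifiers.
VM_MARKERS = ("vmware", "virtualbox", "qemu", "kvm", "hyper-v", "xen", "parallels")

def hardware_identifiers() -> str:
    """Collect hardware strings that commonly reveal a hypervisor."""
    try:
        if platform.system() == "Linux":
            # DMI product name, exposed under /sys on most distributions.
            with open("/sys/class/dmi/id/product_name") as f:
                return f.read().lower()
        if platform.system() == "Windows":
            # wmic is deprecated on newer Windows builds but still widely present.
            result = subprocess.run(
                ["wmic", "computersystem", "get", "manufacturer,model"],
                capture_output=True, text=True, timeout=5,
            )
            return result.stdout.lower()
    except (OSError, subprocess.SubprocessError):
        pass
    return ""

def looks_like_vm() -> bool:
    ids = hardware_identifiers()
    return any(marker in ids for marker in VM_MARKERS)

if __name__ == "__main__":
    print("Possible VM detected" if looks_like_vm() else "No obvious VM markers")
```

A hit here is a signal to investigate, not proof on its own – which is why purpose-built exam browsers layer many such checks together.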
2. Controlled vs. Open Internet
Many exams today allow students full internet access, trusting that supervision will catch any cheating. As we've seen, this trust is misplaced.
Open internet creates open opportunities for cheating. Without proper controls, AI-powered tools are a keystroke away, students can communicate with others in real-time, and someone else could even take the exam remotely. The moment unrestricted internet access is given, you've essentially opened every door that exam security tries to close.
But here's the key insight: the goal isn't to ban the internet entirely. Complete isolation isn't necessary and, in many cases, it's pedagogically undesirable. Modern assessments often legitimately require students to access certain online resources.
The solution is controlled access through an allowlist approach:
- Educators specify exactly which websites and resources students may access
- Everything else is automatically blocked
- The security system operates at the network level, not just within the browser
- Even background applications can't circumvent these restrictions
This gives you the benefits of digital, internet-connected exams – access to authentic tools, real-world resources, and dynamic content – without abandoning security.
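Stripped to its essence, the allowlist decision is deny-by-default: a request passes only if its host matches something the educator configured. Here's a minimal sketch with hypothetical hostnames; in a real deployment this check runs at the network level, in a local proxy or firewall, not in application code.

```python
from urllib.parse import urlparse

# Hypothetical allowlist an educator might configure for one exam.
ALLOWED_HOSTS = {"exam.example-university.edu", "stats.example-resource.org"}

def is_allowed(url: str) -> bool:
    """Deny by default: pass only hosts (or their subdomains) on the allowlist."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == allowed or host.endswith("." + allowed)
               for allowed in ALLOWED_HOSTS)

# The listed resource passes; an AI chatbot does not.
assert is_allowed("https://stats.example-resource.org/datasets/census.csv")
assert not is_allowed("https://chat.openai.com/")
```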
3. Isolated Files vs. Local Files
Here's a vulnerability that often flies under the radar: local file access.
When students can access their computer's file system during an exam, they retain access to everything stored on their device, such as personal documents with pre-written answers, study materials saved locally, even screenshots of previous exam questions. The file system becomes an uncontrolled repository of information that supervision alone cannot monitor or prevent.
And this applies just as much to institution-managed computers, because local storage on a lab machine can still contain cached documents, synced files, shared folders, or leftover data from previous users.
Allowing file explorer access or letting students open and save files locally is never secure. The risk of information leaks is simply too high.
But students need to work with files during exams. They need to download exam materials, upload their answers, perhaps work with datasets or documents. How do you reconcile this need with security?
The answer is isolation. Files should live in a secure workspace that's completely separated from the student's personal file system. In this isolated environment:
- Students can download exam materials without accessing personal files
- Their work is saved securely without touching their local storage
- Submissions are handled within the controlled environment
- The exam workspace is completely untethered from their device's file system
Beyond security, this approach has an often-overlooked benefit: it prevents data loss and missed submissions. When everything happens in a managed environment, there's no "my computer crashed and I lost everything" or "I forgot to submit my file."
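The core mechanism is path confinement: every file operation resolves inside one dedicated directory, and anything that escapes it is refused. Here's an illustrative sketch assuming a simple directory-based sandbox; real exam workspaces enforce this at the system level rather than in application code.

```python
from pathlib import Path

class ExamWorkspace:
    """Illustrative sandbox: every file operation stays inside one directory."""

    def __init__(self, root: str) -> None:
        self.root = Path(root).resolve()
        self.root.mkdir(parents=True, exist_ok=True)

    def _resolve(self, name: str) -> Path:
        # Refuse any path that escapes the workspace root, including
        # traversal tricks such as "../../home/user/notes.txt".
        candidate = (self.root / name).resolve()
        if not candidate.is_relative_to(self.root):
            raise PermissionError(f"{name!r} is outside the exam workspace")
        return candidate

    def write(self, name: str, content: str) -> None:
        self._resolve(name).write_text(content)

    def read(self, name: str) -> str:
        return self._resolve(name).read_text()

ws = ExamWorkspace("exam-session-demo")
ws.write("answers.txt", "Question 1: ...")
print(ws.read("answers.txt"))
# ws.read("../../../etc/passwd")  # raises PermissionError
```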
4. Virtual Apps vs. Local Apps
This is where exam security gets really interesting and where many institutions face their biggest challenge.
What happens when an exam requires specialised third-party desktop applications alongside the assessment platform? When students need Excel for data analysis, MATLAB for engineering work, RStudio for statistics, SPSS for research, or VS Code for programming tasks?
Most institutions take what seems like the practical approach: they tell students to use their locally installed applications. But local applications are fundamentally unrestricted.
When students use locally installed software during exams, they can:
- Access personal files stored in cloud-based drives through the application
- Use built-in AI tools like Microsoft Copilot in Office
- Pull data from the internet if the application has that capability
- Reach the file explorer and preview documents through the application's open/save dialogs
- Sometimes even communicate with others through the application
The software becomes another pathway around exam security, operating largely beyond the reach of supervision or browser-level controls.
And that's just the security side. There are also practical problems that affect every exam administration. Different software versions across student devices create inconsistent experiences, leaving it unclear whether students are working with the same tools and capabilities. Performance gaps between powerful and basic computers affect exam fairness: some students face lag or crashes while others work smoothly. Setup issues eat into exam time, turning what should be an assessment into a troubleshooting session. And when every student has a different configuration, administrators are left scrambling to solve device-specific problems under time pressure.
The solution is virtualisation. By running applications inside a secure virtual workspace – in other words, in a controlled, browser-based environment – educators can:
- Apply specific restrictions to each application
- Ensure every student uses the same software version
- Provide consistent performance regardless of device capabilities
- Control file access within the application
- Disable built-in AI assistants and other risky features
- Maintain exam integrity without sacrificing authenticity
This doesn't mean students need high-end computers. The processing happens in the cloud, and the interface streams to their device. A student with a basic laptop has the same experience as one with a powerful workstation.
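What could "per-application restrictions" look like in practice? Here's a hypothetical session specification – the field names are invented for illustration, not taken from any product's configuration format – showing that version, network access, file access, and AI features can each be pinned per tool.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualAppSession:
    """Hypothetical per-application policy for a virtualised exam workspace."""
    app: str
    version: str                          # every student gets this exact build
    allowed_hosts: list[str] = field(default_factory=list)
    file_access: str = "workspace-only"   # no local file system exposure
    ai_assistants: bool = False           # built-in assistants like Copilot stay off

# One policy per tool the exam legitimately needs:
sessions = [
    VirtualAppSession(app="Excel", version="16.0.17531"),
    VirtualAppSession(app="RStudio", version="2024.04.2",
                      allowed_hosts=["cran.r-project.org"]),
]

for s in sessions:
    print(f"{s.app} {s.version}: files={s.file_access}, AI={s.ai_assistants}")
```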
5. The System-Level Approach
Here's what ties all these principles together: real security happens at the system level, not the application level.
When you try to secure exams through browser settings, LMS configurations, or honour codes, you're essentially asking students to secure their own exams. You're relying on barriers that tech-savvy students can easily circumvent.
System-level security means:
- Restrictions are enforced by the operating system, not just software
- Students can't simply disable or bypass security features
- The entire device operates in a controlled exam mode
- Even background processes and system-level applications are managed
- Security continues even if the main exam application loses focus
This is the fundamental architecture that makes the other four principles possible. Without system-level enforcement, every security measure becomes optional. Importantly, system-level security is also future-proof: while AI tools evolve rapidly, the operating system controls that prevent their use remain stable. New AI capabilities don't change how system-level restrictions function.
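As a small, concrete taste of what system-level awareness involves, here's a sketch that scans running processes against a denylist before an exam starts. The process names are illustrative, it relies on the third-party psutil library, and real systems go well beyond name matching – enforcing restrictions in the operating system rather than merely reporting them.

```python
import psutil  # third-party: pip install psutil

# Illustrative names only; real systems use richer signals than process names.
BLOCKED = {"teamviewer", "anydesk", "discord", "obs64"}

def find_blocked_processes() -> list[str]:
    """Report running processes whose names match the denylist."""
    hits = []
    for proc in psutil.process_iter(["name"]):
        name = (proc.info["name"] or "").lower()
        if any(blocked in name for blocked in BLOCKED):
            hits.append(name)
    return hits

if __name__ == "__main__":
    offenders = find_blocked_processes()
    if offenders:
        print("Exam cannot start; close these first:", ", ".join(sorted(set(offenders))))
    else:
        print("No blocked processes found.")
```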
The middle ground: authentic exams with real security
What this means in practice is that you can create exams that:
- Use modern digital tools, authentic assessments, and open-book formats
- Require specialised software like Microsoft Office, MATLAB, or SPSS
- Allow controlled internet access to relevant resources
- Let students work with files and datasets
- Provide consistent experiences across different devices
All while preventing AI cheating and maintaining exam integrity.
Unlike AI detectors that colleges and teachers use to scan completed work, this prevention-based architecture addresses the fundamental question institutions are asking: not 'can we detect if work used AI?' but 'how do we create conditions where using AI during exams simply isn't possible?' This shifts the focus from after-the-fact detection to real-time prevention.
You don't need to choose between security and authenticity. You don't need to go back to pen and paper to prevent AI use in exams. And you don't need invasive monitoring that makes students feel like they're under surveillance.
What you need is architecture built on the right principles from the ground up.
Security that enables
Let’s be clear: the goal isn't to create a police state in digital education. It's not about assuming students are criminals or treating them with suspicion.
The goal is to create an environment where:
- Honest students aren't disadvantaged
- Academic credentials retain their value
- Institutions can trust their assessment results
- Students who want to cheat find it genuinely difficult to do so
When security is built on the right principles, it becomes invisible to students who are simply trying to take their exam honestly. They don't notice the controls because the controls aren't interfering with legitimate activity.
But students who attempt to cheat - whether by accessing AI tools, communicating with others, or using unauthorised resources - find that the architecture simply doesn't allow it. Not because someone is watching them, but because the system is built correctly from the foundation up.
The time for half-measures is over
The Flemish case should be a wake-up call, but not in the way some people think. The answer isn't to abandon digital exams. It's not to go back to pen and paper. It's not even to add more supervisors or stricter honour codes.
The answer is to build digital exam systems on proven principles that work.
These five principles aren't theoretical ideals. They're proven approaches that institutions around the world use every day to conduct secure, authentic digital exams.
The question isn't whether secure digital exams are possible. They are. The question is whether your institution is willing to implement the principles that make them secure.
Because in an age of powerful AI, half-measures don't cut it anymore. You need either real security or pen and paper; the only middle ground that still works is doing digital exams correctly.
Ready to Implement the 5 Pillars of AI-Proof Assessments?
At Schoolyear, we've built our entire platform around these five pillars. We help educational institutions conduct authentic, secure digital exams without invasive monitoring or the limitations of traditional lockdown approaches.
Want to see how these principles work in practice? Contact us for a demo or learn more about our Safe Exam Workspace.
Because your students and your academic standards deserve better than hoping supervision is enough.
1 Read more about this incident here:
- "Vijf kandidaten voor toelatingsexamen arts worden geschrapt uit rangschikking wegens fraude," De Standaard, September 12, 2025.
- "More than 100 students used internet during medical entrance exam," VRT NWS, September 5, 2025.
- "Meer dan honderd kandidaten speelden vals bij toelatingsexamen geneeskunde: 25 moeten verschijnen voor examencommissie," Het Laatste Nieuws, August 2025.
- "Raad van State: artsenexamen, examencommissie geschorst," VRT NWS, November 3, 2025.
