Digital exams promise efficiency, scalability, and authentic assessment. But without proper security architecture, they can become vectors for widespread academic fraud. Recent incidents of AI-enabled cheating in high-stakes exams have exposed a critical gap: the difference between conducting exams digitally and conducting them securely.
These incidents share a common pattern: digital exam systems that allow students to access AI tools during tests, while supervisors watching from the back of the room remain powerless to detect it.
For institutions that dismissed lockdown browsers as unnecessary, these incidents serve as wake-up calls. But the reaction of a large part of the sector – shifting back toward pen-and-paper exams to avoid AI risks – is equally misguided. We don’t need to choose between unsecured digital exams and abandoning digital assessment entirely.
There’s a third way. And it starts with understanding the fundamental principles that make digital exams genuinely secure.
What went wrong?
Recent AI cheating incidents weren’t just about students being clever or AI being powerful. They were signs of gaps in exam security setups that turned high-stakes assessments into case studies in what not to do.
In several documented cases, exams were conducted digitally across multiple locations with thousands of students taking them simultaneously on computers. But the security architecture had critical flaws that allowed students to access the internet and reach ChatGPT during tests. Anyone who tried could access AI assistance.
There was no file isolation, no application control, and no system-level enforcement. Just supervision from the back of the room and the hope that students would stay honest – a method long assumed to be an effective deterrent against cheating.
In these cases, examination boards often couldn’t check everyone’s search history at that scale. Investigators found that very little usable data remained on the computers in physical exam labs, and what they did recover was highly fragmented. The architecture made cheating easy to commit and nearly impossible to prove afterwards.
Layer supervision on top of these vulnerabilities, and you’re essentially hoping students choose not to cheat. Hope is not a strategy.
The false dichotomy: supervision vs. paper
For years, we’ve heard the same refrain from educational institutions: “We have supervisors watching. That’s enough.” Recent incidents have proven what many of us already knew: it’s not enough, not even close.
When AI tools can generate answers in seconds, when students can access LLMs through their browser, and when communication apps run silently in the background, a supervisor watching screens from the back of the room is essentially powerless.
But now the pendulum has swung to the opposite extreme. “Let’s just go back to pen and paper!” some institutions are saying. This reaction is equally problematic. It throws away the richer assessment possibilities that digital tools enable, the efficiency gains in grading and exam administration, and the scalability that modern education demands.
The truth is more nuanced: you can have secure and scalable digital exams if you build on the right principles.
The 5 pillars of AI-proof assessments
Let’s break down what actually makes a digital exam secure. These aren’t nice-to-haves; they’re the foundation that prevents the kind of AI-enabled cheating we’re seeing in digital assessments.
1. Exam Browser vs. Normal Browser
The truth is: normal browsers are not designed for exams. They’re built for freedom, for multitasking, for productivity. During an exam, those features become vulnerabilities.
Think about it. A standard browser, even with restrictions like full-screen mode or screen recording extensions, still allows students to open locally installed applications, run background processes, access remote desktop software, and use browser extensions with built-in AI assistants.
And here’s where it gets particularly insidious: AI-enhanced browsers like ChatGPT’s new Atlas browser have integrated AI assistants that don’t require visiting another website or installing an extension. Students can access these tools without leaving their exam environment and without triggering any obvious red flags.
Purpose-built exam browsers make the difference. These specialised tools enforce restrictions at the system level, not just the browser level. They can:
- Block other applications from launching
- Disable keyboard shortcuts that might open other programs
- Detect when students are running the exam in a virtual machine
- Control internet access with surgical precision
- Monitor for suspicious activity patterns
This is about creating an exam environment that faithfully reflects the controlled conditions of a physical testing room.
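To make this concrete, here’s a minimal sketch (in Python, using the third-party psutil library) of the kind of application control an exam browser performs. The blocklist is illustrative, and a real exam browser enforces this through OS hooks rather than polling from user space:

```python
import time
import psutil  # third-party: pip install psutil

# Illustrative blocklist: names of applications an exam session disallows.
BLOCKED_APPS = {"chrome.exe", "discord.exe", "teamviewer.exe", "obs64.exe"}

def enforce_app_blocklist() -> list[str]:
    """Terminate any running process whose name is on the blocklist.

    Returns the names of processes that were stopped, so the exam
    client can log the event for later review.
    """
    stopped = []
    for proc in psutil.process_iter(["name"]):
        name = (proc.info["name"] or "").lower()
        if name in BLOCKED_APPS:
            try:
                proc.terminate()
                stopped.append(name)
            except (psutil.NoSuchProcess, psutil.AccessDenied):
                pass  # already gone, or needs OS-level privileges
    return stopped

if __name__ == "__main__":
    # Poll for the duration of the exam; a real exam browser hooks
    # process-creation events instead of polling.
    while True:
        for name in enforce_app_blocklist():
            print(f"blocked application terminated: {name}")
        time.sleep(2)
```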
2. Controlled vs. Open Internet
Many exams today allow students full internet access, trusting that supervision will catch any cheating. As we’ve seen, this trust is misplaced.
Open internet access creates open opportunities for cheating. Without proper controls, AI-powered tools are a keystroke away, students can communicate with others in real time, and someone else could even take the exam remotely. The moment unrestricted internet access is granted, you’ve essentially opened every door that exam security tries to close.
But here’s the key insight: the goal isn’t to ban the internet entirely. Complete isolation isn’t necessary and, in many cases, it’s pedagogically undesirable. Modern assessments often legitimately require students to access certain online resources.
The solution is controlled access through an allowlist approach:
- Educators specify exactly which websites and resources students may access
- Everything else is automatically blocked
- The security system operates at the network level, not just within the browser
- Even background applications can’t circumvent these restrictions
This gives you the benefits of digital, internet-connected exams – access to authentic tools, real-world resources, and dynamic content – without abandoning security.
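As a rough illustration of the matching logic such an allowlist applies to each outbound request, here’s a short Python sketch. The domains are placeholders, and a production system enforces this at the network layer (for example, in a local filtering proxy), not in application code:

```python
from urllib.parse import urlsplit

# Illustrative allowlist set by the educator for one exam.
ALLOWED_HOSTS = {"exam.example-university.edu", "docs.example-stats-tool.com"}

def is_request_allowed(url: str) -> bool:
    """Allow a request only if its host is on the allowlist.

    Subdomains of an allowed host are permitted; everything else
    is blocked by default.
    """
    host = (urlsplit(url).hostname or "").lower()
    if not host:
        return False  # malformed URLs get no benefit of the doubt
    return any(
        host == allowed or host.endswith("." + allowed)
        for allowed in ALLOWED_HOSTS
    )

assert is_request_allowed("https://exam.example-university.edu/question/1")
assert not is_request_allowed("https://chat.openai.com/")  # blocked by default
```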
3. Isolated Files vs. Local Files
Here’s a vulnerability that often flies under the radar: local file access.
When students can access their computer’s file system during an exam, they retain access to everything stored on their device: personal documents with pre-written answers, study materials saved locally, even screenshots of previous exam questions. The file system becomes an uncontrolled repository of information that supervision alone cannot monitor or control.
And this applies just as much to institution-managed computers, because local storage on a lab machine can still contain cached documents, synced files, shared folders, or leftover data from previous users.
Allowing file explorer access or letting students open and save files locally is never secure. The risk of information leaks is simply too high.
But students need to work with files during exams. They need to download exam materials, upload their answers, perhaps work with datasets or documents. How do you reconcile this need with security?
The answer is isolation. Files should live in a secure workspace that:
- Exists only for the duration of the exam
- Contains only the materials the educator provides
- Cannot access the student’s personal file system
- Stores student work in a controlled environment
- Automatically disappears after the exam ends
This approach maintains the functionality students need while eliminating the security risks of local file access. Students can download exam materials, work with documents and datasets, and upload their answers - all within a controlled environment that doesn’t touch their personal files.
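To sketch the idea in code: the snippet below (Python, standard library only) models an ephemeral workspace that receives only educator-provided files and is destroyed when the session ends. A real implementation must additionally block access to the rest of the file system at the OS level, which a user-space script like this cannot do:

```python
import shutil
import tempfile
from contextlib import contextmanager
from pathlib import Path

@contextmanager
def exam_workspace(provided_materials: list[Path]):
    """Yield an isolated working directory that exists only for the exam.

    Only the educator-provided files are copied in; the directory and
    everything in it are deleted when the session ends, even on error.
    """
    workdir = Path(tempfile.mkdtemp(prefix="exam-"))
    try:
        for src in provided_materials:
            shutil.copy(src, workdir / src.name)
        yield workdir  # the exam client points all open/save dialogs here
    finally:
        shutil.rmtree(workdir, ignore_errors=True)  # workspace disappears

# Hypothetical usage: collect answers before the workspace is destroyed.
# with exam_workspace([Path("dataset.csv")]) as ws:
#     ...  # student works only inside `ws`
#     submit(ws / "answers.docx")  # upload to the exam server (not shown)
```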
4. Application Security: Virtualisation vs. Local Installation
Here’s a vulnerability that often catches institutions off guard: locally installed applications.
When students use applications installed on their computers – even legitimate educational software like Microsoft Office, MATLAB, or SPSS – those applications become potential security vulnerabilities during exams. Through the application, students can:
- Access personal files stored in cloud-based drives
- Use built-in AI tools like Microsoft Copilot in Office
- Pull data from the internet, if the application has that capability
- Reach the file explorer and preview documents through the application’s open/save dialogs
- Sometimes even communicate with others

The software becomes another pathway around exam security, operating largely beyond the reach of supervision or browser-level controls.

And that’s just the security side. There are also practical problems that affect every exam administration:
- Different software versions across student devices create inconsistent experiences, making it unclear whether students are working with the same tools and capabilities
- Performance gaps between powerful and basic computers affect exam fairness, with some students facing lag or crashes while others work smoothly
- Setup issues and technical difficulties eat into exam time, turning what should be an assessment into a troubleshooting session
- Troubleshooting becomes a nightmare when every student has a different configuration, leaving exam administrators scrambling to solve device-specific problems under time pressure
The solution is virtualisation. By running applications inside a secure virtual workspace – in other words, a controlled, browser-based environment – educators can:
- Apply specific restrictions to each application
- Ensure every student uses the same software version
- Provide consistent performance regardless of device capabilities
- Control file access within the application
- Disable built-in AI assistants and other risky features
- Maintain exam integrity without sacrificing authenticity
This doesn’t mean students need high-end computers. The processing happens in the cloud, and the interface streams to their device. A student with a basic laptop has the same experience as one with a powerful workstation.
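As a hypothetical illustration, the sketch below models the per-application restrictions an educator might pin for a virtualised session. The field names are invented for this example, not an actual product API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class VirtualAppProfile:
    """Illustrative per-application policy for a cloud-hosted exam session.

    Every student gets the same pinned version and the same restrictions,
    regardless of what is installed on their own device.
    """
    app_name: str
    pinned_version: str            # identical software version for everyone
    ai_assistants_enabled: bool    # e.g. disable Copilot-style features
    cloud_drives_enabled: bool     # block personal cloud-drive access
    allowed_paths: tuple[str, ...] = ()  # only the isolated exam workspace

# One profile an educator might define for a statistics exam.
spreadsheet_profile = VirtualAppProfile(
    app_name="spreadsheet-editor",
    pinned_version="16.0.17328",   # placeholder version string
    ai_assistants_enabled=False,
    cloud_drives_enabled=False,
    allowed_paths=("/exam-workspace",),
)
```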
5. The System-Level Approach
Here’s what ties all these principles together: real security happens at the system level, not the application level.
When you try to secure exams through browser settings, LMS configurations, or honour codes, you’re essentially asking students to secure their own exams. You’re relying on barriers that tech-savvy students can easily circumvent.
System-level security means:
- Restrictions are enforced by the operating system, not just software
- Students can’t simply disable or bypass security features
- The entire device operates in a controlled exam mode
- Even background processes and system-level applications are managed
- Security continues even if the main exam application loses focus
This is the fundamental architecture that makes the other four principles possible. Without system-level enforcement, every security measure becomes optional. Importantly, system-level security is also future-proof: while AI tools evolve rapidly, the operating system controls that prevent their use remain stable. New AI capabilities don’t change how system-level restrictions function.
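Here’s a toy Python sketch of that lifecycle: restrictions are engaged before the exam UI starts, re-asserted by an independent watchdog rather than tied to the exam window’s focus, and reliably lifted afterwards. The policy functions are stand-ins for real OS hooks (kiosk mode, network filters, process policies):

```python
import threading
import time

def apply_restrictions():
    """Stand-in for OS-level hooks: kiosk mode, network filter, app blocklist."""
    print("exam mode ON")

def reassert_restrictions():
    """Stand-in check that policies are still active, independent of the exam UI."""
    print("policies verified")

def lift_restrictions():
    print("exam mode OFF")

def run_exam_session(run_exam_ui, check_interval: float = 5.0):
    """Hold system-level restrictions for the whole session.

    The watchdog thread keeps enforcing policy even if the exam
    application crashes or loses focus; `finally` guarantees the
    device leaves exam mode afterwards.
    """
    stop = threading.Event()

    def watchdog():
        while not stop.is_set():
            reassert_restrictions()
            stop.wait(check_interval)

    apply_restrictions()
    guard = threading.Thread(target=watchdog, daemon=True)
    guard.start()
    try:
        run_exam_ui()  # the exam client itself; security does not depend on it
    finally:
        stop.set()
        guard.join()
        lift_restrictions()

if __name__ == "__main__":
    run_exam_session(lambda: time.sleep(12))  # stand-in for the real exam UI
```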
The middle ground: authentic exams with real security
What this means in practice is that you can create exams that:
- Use modern digital tools for authentic assessments and open-book exams
- Require specialised software like Microsoft Office, MATLAB, or SPSS
- Allow controlled internet access to relevant resources
- Let students work with files and datasets
- Provide consistent experiences across different devices

All while preventing AI cheating and maintaining exam integrity.
Unlike the AI detectors that colleges and teachers use to scan completed work, this prevention-based architecture addresses the fundamental question institutions are actually asking: not ‘can we detect whether work used AI?’ but ‘how do we create conditions where using AI during exams simply isn’t possible?’ This shifts the focus from after-the-fact detection to real-time prevention.
You don’t need to choose between security and authenticity. You don’t need to go back to pen and paper to prevent AI use in exams. And you don’t need invasive monitoring that makes students feel like they’re under surveillance.
What you need is architecture built on the right principles from the ground up.
Security that enables
Let’s be clear: the goal isn’t to create a police state in digital education. It’s not about assuming students are criminals or treating them with suspicion.
The goal is to create an environment where:
- Honest students aren’t disadvantaged
- Academic credentials retain their value
- Institutions can trust their assessment results
- Students who want to cheat find it genuinely difficult to do so
When security is built on the right principles, it becomes invisible to students who are simply trying to take their exam honestly. They don’t notice the controls because the controls aren’t interfering with legitimate activity.
But students who attempt to cheat - whether by accessing AI tools, communicating with others, or using unauthorised resources - find that the architecture simply doesn’t allow it. Not because someone is watching them, but because the system is built correctly from the foundation up.
The time for half-measures is over
These incidents should be a wake-up call, but not in the way some people think. The answer isn’t to abandon digital exams. It’s not to go back to pen and paper. It’s not even to add more supervisors or stricter honour codes.
The answer is to build digital exam systems on proven principles that work.
These five principles aren’t theoretical ideals. They’re proven approaches that institutions around the world use every day to conduct secure, authentic digital exams.
The question isn’t whether secure digital exams are possible. They are. The question is whether your institution is willing to implement the principles that make them secure.
Because in an age of powerful AI, half-measures don’t cut it anymore. You need either real security or pen and paper. There’s no longer a middle ground that works – except one: doing digital exams correctly.
Ready to Implement the 5 Pillars of AI-Proof Assessments?
At Schoolyear, we’ve built our entire platform around these five pillars. We help educational institutions conduct authentic, secure digital exams without invasive monitoring or the limitations of traditional lockdown approaches.
Want to see how these principles work in practice? Contact us for a demo or learn more about our Safe Exam Workspace.
Because your students and your academic standards deserve better than hoping supervision is enough.
