Rethinking Assessments for a GenAI-Native World

by Shiva Kakkar


July 2, 2025


The most dangerous myth in higher education today is that learning ends with an exam. In an AI-native world, that assumption is not just outdated—it's irresponsible. With students now capable of generating entire reports, analyses, and presentations using GenAI tools like ChatGPT, the traditional exam format has lost both its integrity and its purpose. The problem isn't that students are using AI. The real problem is that our assessments still pretend they aren't.

At Jaipuria AI Labs, we decided to stop patching up the old system. Instead, we rewired it completely—around how learning actually happens when Generative AI becomes part of the workflow. This redesign rests on three integrated frameworks: ACE, TRUST, and MIST. Together, they shift the focus from AI detection to AI integration—from catching misuse to designing better assessments.

The ACE framework (Authorized, Conditional, Excluded) helps faculty define how much AI use is permitted in each assignment. No guesswork. No grey zones. For open-ended, creative tasks, AI use may be fully authorized. For more technical or critical reasoning exercises, it might be allowed conditionally—with specific tools, prompts, or documentation. And for certain foundational skills, AI might be excluded altogether. ACE doesn't assume AI is good or bad—it assumes that faculty know their course outcomes best and gives them the structure to make smart choices.

But policies alone aren't enough. Misuse happens, and when it does, it must be handled with fairness and consistency. That's where the TRUST framework comes in—Track, Review, Understand, Substantiate, and Take action. It's an investigation model that prioritizes conversation over accusation, placing the student's voice at the center. Instead of relying on unreliable detection tools, faculty look for patterns, anomalies, and inconsistencies, and then engage students in a structured review process. It's transparent, respectful, and grounded in evidence.

The most powerful piece of the redesign, though, is MIST. It flips the whole idea of assessment on its head. Rather than designing tasks AI could easily complete, MIST assessments are Multidimensional, Iterative, Situated, and Traceable. Students must compare outputs from different AI tools, iterate across multiple sessions, apply AI to deeply personal contexts, and log every step of their thinking. The result? They learn to use GenAI not as a shortcut, but as a thinking partner. It raises cognitive demand rather than lowering it.

A finance student, for instance, might run multiple investment simulations using GenAI, reflect on tradeoffs, and build a portfolio suited to their personal risk appetite. A marketing student might design a brand campaign using three different GenAI tools, documenting their process and evaluating outputs before refining. These aren't just assignments—they're simulations of how modern work really gets done.

This is what it means to be AI-native. Not just using GenAI, but building pedagogy around its presence. At Jaipuria AI Labs, we're not waiting for AI to disrupt education—we're designing education that works with AI from the ground up. Because the real test isn't whether students can memorize information. It's whether they can ask better questions, navigate uncertainty, and apply intelligent tools with judgment.

Assessment doesn't need to be abolished. It needs to evolve. And we're showing what that evolution looks like—one prompt, one protocol, one reimagined exam at a time.