Assessment in the Age of AI
How learning designers can rethink assessment for the generative era
The challenge for assessment design
AI has made good writing easy to fake.
When students can produce coherent, well-reasoned answers in seconds using tools like ChatGPT, some of our most established forms of assessment suddenly feel fragile.
This is especially true for:
Essays asking students to “discuss” or “critically evaluate”
Short-answer tasks focused on summarising theory
Case responses requiring generalised reasoning
These are still important skills. But they’re also tasks AI is increasingly good at performing — and doing so convincingly.
As learning designers, we now face this challenge: how can we design assessment tasks that align with intended outcomes, support authentic learning, and still hold pedagogical value in an AI-enabled world?
Why this matters for learning designers
Academic integrity policies, detection tools, and plagiarism penalties aren’t enough on their own. In fact, many institutions are now actively moving away from using AI detection at all, given its unreliability and ethical concerns.
What’s needed is a shift in design — especially for online and blended courses, where assessments are often asynchronous and remote.
And that shift needs to start with us.
Aligning assessment with outcomes
Our approach at Learning Design Solutions is grounded in the principle of constructive alignment (Biggs, 1996). We always start by asking:
What are students meant to be able to do by the end of this module?
How can that be demonstrated authentically?
What kind of task best supports that demonstration — and makes use of the context the learner brings?
We work closely with subject matter experts (SMEs) to reframe tasks so they require more than summarising ideas. Instead, we’re aiming for:
Judgement
Application
Synthesis
Reflection
Original decision-making
Types of assessment that hold up in the age of AI
Some examples of tasks we’ve helped design that are less susceptible to AI misuse and better aligned to real-world skills:
Contextualised outputs
E.g. “Draft a communication strategy for your current organisation” or “Apply this theory to a challenge from your own work.”
Process-based tasks
E.g. the submission includes draft stages, decision points, and planning commentary.
Judgement tasks
E.g. “Compare two approaches and recommend the better option for a given scenario.”
Reflection and critique
E.g. “Analyse how your own thinking has shifted over the module in response to this theory or case.”
Creative synthesis
E.g. learners build a framework, design a proposal, or construct a new categorisation.
These tasks aren’t immune to AI support — but they make it much harder to delegate thinking. And they create opportunities for students to demonstrate their own experience, voice, and perspective.
AI can be built into assessment, not avoided
Instead of banning AI, many institutions are now embedding it within assessments. This not only reflects real-world practice but also helps students develop essential AI literacy.
Some design approaches we’ve used:
Ask students to critique or improve an AI-generated response.
Require students to reflect on how they used AI, including prompts and evaluation.
Let students use AI, but focus marking criteria on judgement, customisation, or application (see the rubric sketch below).
The goal is not to stop students using AI — it’s to design tasks where using AI still requires learning.
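For course teams who keep their marking scheme in a spreadsheet or a short script, one way to express that shift in emphasis is sketched below. This is a minimal, illustrative example in Python; the criterion names and weightings are hypothetical, not a prescribed scheme.

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    weight: float  # share of the total mark, expressed as 0-1

# Illustrative weighting only: most marks sit with judgement and
# application, and reflection on AI use is explicitly assessable.
rubric = [
    Criterion("Application to the learner's own context", 0.35),
    Criterion("Quality of judgement and justification", 0.30),
    Criterion("Critique and customisation of AI-generated material", 0.20),
    Criterion("Transparency of process (prompts, decisions, reflection)", 0.15),
]

# Sanity check: the weights should account for the whole mark.
assert abs(sum(c.weight for c in rubric) - 1.0) < 1e-9

for c in rubric:
    print(f"{c.weight:.0%}  {c.name}")
```

The point is the distribution: most of the marks sit with judgement and contextual application, so an unedited AI-generated draft earns very little on its own.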
Tiered assessment models: A new standard?
Some of the most interesting recent practice includes tiered assessment models, where assessment tasks are categorised as:
AI-enabled – Use encouraged with transparency.
AI-limited – Specific tools may be allowed.
AI-prohibited – Designed to assess unaided performance.
This gives clarity to students, flexibility to course teams, and structure to the assessment strategy across a programme.
Frameworks like the AI Assessment Scale (AIAS) trialled at British University Vietnam have shown promising results: higher quality submissions, reduced misconduct, and increased learner confidence.
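A tiered model only works at programme level if everyone can see which tier each task sits in. The sketch below assumes a course team tracking this in a simple Python script; the tier labels mirror the three categories above, but the module names and task list are purely illustrative.

```python
from collections import Counter
from enum import Enum

class AIUse(Enum):
    ENABLED = "AI-enabled"        # use encouraged, with transparency
    LIMITED = "AI-limited"        # specific tools may be allowed
    PROHIBITED = "AI-prohibited"  # designed to assess unaided performance

# Hypothetical programme map: each assessment task tagged with a tier.
assessments = {
    "Module 1: reflective portfolio": AIUse.ENABLED,
    "Module 2: case-based recommendation": AIUse.LIMITED,
    "Module 3: invigilated synthesis task": AIUse.PROHIBITED,
    "Module 4: workplace strategy proposal": AIUse.ENABLED,
}

# Summarise how the tiers are spread across the programme.
summary = Counter(tier.value for tier in assessments.values())
for tier, count in sorted(summary.items()):
    print(f"{tier}: {count} task(s)")
```

A summary like this makes it easy to check the balance of tiers across a programme before it reaches students.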
What the research is telling us
A growing body of work supports what we’ve observed in practice:
Process transparency matters. Asking students to document how they worked (and how they used AI) strengthens integrity.
Authentic tasks alone aren’t enough. Even scenario-based assessments can be gamed if they rely on predictable formats.
Assessment must shift alongside pedagogy. Structural change — not just new policies — is what makes a difference.
AI detection tools are unreliable. Ethical, thoughtful design is more effective than technological policing.
(For citations, see: Ilieva et al., 2025; Kofinas et al., 2024; HEPI, 2025; Luo et al., 2025)
Final thoughts for learning designers
AI hasn’t made assessment impossible. But it has made poorly designed assessment obsolete.
As learning designers, we’re in a strong position to lead this transformation. We can:
Help SMEs reframe their tasks
Offer better scaffolding and examples
Design marking criteria that reward thinking, not just reproduction
Advocate for assessment to evolve in line with authentic learning outcomes
This is a chance to create deeper, more relevant, and more engaging assessment — and to build student trust by making the purpose of assessment clear.
Want to redesign your assessment strategy with us?
Explore our AI-supported demo course:
View in MoodleCloud (guest access)
References
Biggs, J. (1996). Enhancing teaching through constructive alignment. Higher Education, 32(3), 347–364. https://doi.org/10.1007/BF00138871
HEPI (2025). Transforming higher education learning, assessment and engagement in the AI revolution: The how. Higher Education Policy Institute. https://www.hepi.ac.uk/2025/07/14/transforming-higher-education-learning-assessment-and-engagement-in-the-ai-revolution-the-how/
Ilieva, J., Shrestha, P. and So, H.-J. (2025). A framework for generative AI-driven assessment in higher education: Towards a responsible use of AI in learning and evaluation. Information, 16(6), 472. https://doi.org/10.3390/info16060472
Kofinas, A., Williams, J. and McPhail, R. (2024). The impact of generative AI on the academic integrity of authentic assessments. British Journal of Educational Technology. https://doi.org/10.1111/bjet.13585
Luo, J., Yang, Y., Wu, Y. and Lee, J. (2025). Design and assessment of AI based learning tools in higher education: A systematic review. International Journal of Educational Technology in Higher Education, 22(1). https://doi.org/10.1186/s41239-025-00540-2
Nguyen, N., et al. (2024). The AI Assessment Scale: Supporting academic integrity and innovation in higher education assessments. arXiv preprint. https://arxiv.org/abs/2403.14692
Wiggins, G. and McTighe, J. (2005). Understanding by Design (Expanded 2nd ed.). ASCD.