Lessons Learnt Experimenting with AI in Assessment - from institutions that have.
Reflections from University of Kent's Digitally Enhanced Education Webinar
Ever since ChatGPT burst onto the scene as the poster child of generative AI, universities have been grappling with how to respond. As Generative AI (GenAI) tools multiply and their capabilities expand, higher education has been scrambling to make sense of what some see as a revolutionary opportunity and others as a looming threat.
And in the absence of policy directives or mandates, institutional responses have been diverse. Some chose to ignore it, hoping to avoid knee-jerk reactions. Some chose to ban it. And some chose to work with it, to varying degrees.
Yet as we learn more about its capabilities and uses, and as the technology grows more sophisticated, our initial responses have quickly become outdated. Higher education providers are waking up to the fact that ignoring or banning AI is untenable, and are now evolving their institutional positions to work with it.
As some institutions shift gears to unveil new AI policy positions this September, it’s worth noting that others have already done so and have plenty of practised wisdom to learn from. They’ve been there, done that, and got the AI-generated t-shirt.
Photo: Zoya Yasmine / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/
The most recent University of Kent AI webinar was a great reminder of this, serving up nine institutional postcards offering different views from recently travelled AI journeys in assessment. There were many take-homes from this session, but the following felt the most salient, and particularly useful for institutions just starting an engagement journey of their own.
Guidance not Policy
Most speakers advocated for an approach rooted in principles rather than policy. Institutions are always keen to have consistent practice motivated by compliance, governance, and efficiency. Yet, when institutions house multiple faculties and subjects, it’s impossible to shoehorn everyone into one policy that works across all of them.
Within this, autonomy matters. Schools, Programmes and Modules require the flexibility and the trust to find their ‘best’ AI approach. This doesn’t mean that guidance isn’t helpful, but rather that it shouldn’t be mandated or prescriptive.
Perhaps the best solution mentioned was adopting an institutional default position, with module convenors empowered to rewrite or override the statement if they wish. In practice, ~90% of module convenors tend to default, but it empowers the others to embed GenAI at their discretion. The key here is trusted flexibility: staff are trusted and empowered to teach and assess as they see best.
Institutions also need to ask themselves whether they want to be AI guides or AI police. Policy approaches require policing, enforcement, and resources, which some institutions might not have, and require a mandated moral authority which institutions (in my opinion) shouldn’t have. A guidance-by-principles approach enables a better pedagogical ethic.
Clarity Clarity Clarity
Whatever position is taken, it must be clearly communicated. It’s not good enough to bury convoluted documents on SharePoint and expect students (and staff!) to figure it out.
The best practices are those which do two things: they embed clear, well-presented guidance into module pages (Moodle, Blackboard, etc.) where students can easily find it, and they signpost principles within the classroom, in the teaching itself, so as to explain and advise on stances clearly.
Institutions have adopted different approaches for this: traffic light systems, ‘essential, optional, prohibited’ rules, and so on. There is no single right way, but simplicity and clarity are key.
Clarity is important for all students, but especially for dyslexic and neurodiverse students. In a study surveying disabled students, dyslexic and neurodiverse students reported finding ambiguous, unembedded policies harder to find and interpret. That said, despite the techno-solutionism surrounding AI and disability, we shouldn’t simply assume AI is automatically inclusive. Low- or no-tech solutions can often be just as, if not more, inclusive. It’s worth remembering that the disabled student voice should always be embedded in AI policy making.
Positioning not Progression. Engaging not Embracing.
Language matters. We often frame digital transformation as a linear journey: ban to use, use to embrace. Framed this way, newfound ‘cautious engagement’ with AI can feel like the start of a slippery slope to a full-scale ‘embrace’. With institutions partnering with AI providers, those concerns are not completely unfounded! Yet the truth is, GenAI is still evolving, and to many of the critical questions we still don’t have answers.
As GenAI evolves, our positions on it should too. Just as it’s unhelpful to position exam halls as the valueless ‘past’, we should not conceive of an all-out AI embrace as the only future. We should always remain critical of it, and position ourselves as actively responsive to the realities it presents. Talking of positioning, rather than progression or timelines, helps articulate this ongoing dynamism, as does being wary of affectionate words like ‘embracing’ AI in policy documents.
Saying No ≠ Luddite
With all change, there are sceptics. And AI in higher education gets its fair share. But we shouldn’t dismiss out of hand those who say no to AI.
Institutions should be empowering staff to be AI literate, but not necessarily AI converts. If our aim is to enable responsible AI engagement, then we need to accept that deciding not to use AI in an assessment is a valid choice too. This could be on conscientious grounds, or simply because staff think the learning outcomes are best demonstrated without it. That is a legitimate position, and we shouldn’t be ridiculing those who take it. The most important mission for universities is to encourage staff to engage meaningfully and responsibly with AI literacy, to help them decide for themselves how to use it (or not!).
Finding Space for Learning, and Embracing the Fury
As institutions race to release frameworks, policies and guidance notes, perhaps the most important task is to create space for staff to learn about AI. Staff AI literacy empowers contextual, principled decision making, which enables staff to find their own approach. There are so many ways institutions are doing this: workshops, roadshows, town halls, ‘let’s talk’ sessions, ‘let’s explore’ sessions, best-practice ‘banks’, and example use cases.
Staff upskilling works best when initiatives know their audience and their baseline competency. At most institutions this has, in the main, settled into two strands: one pitched at ‘bare essential’ AI understanding, and the other at ‘developing AI in practice’.
Importantly, it’s not the form that matters here; it’s the tone of the sessions. They need to be practical, engaging, light-hearted, and specific. Anecdotal reflections from those who have run these sessions highlight a number of key points.
Firstly, be prepared that these workshops will often be the first time participants have been formally invited to use or reflect on AI. When there is no prior ‘formal’ training available, you will get a really varied audience. Some will have advanced knowledge, whilst others will need key terms explained and myths busted. Don’t skimp on this.
Secondly, it will be a bit of a free-for-all, as people are hungry for training. Novices will attend advanced sessions, administrative staff will attend educational practice sessions, and so on. Even if you clearly signpost the intended audience for each session, a variety of people will just turn up. Be prepared for this when planning.
Thirdly, training sessions appear to work best in person, where humans (not the AI) are centred. Recorded resources are fine (and should be there to complement in-person sessions!), but empathetic and compassion-centred work requires coming together, sharing honest reflections, and sharing worries and failures as much as best practice. The most important thing is that they create space for open and honest discussion about what we know and, perhaps more importantly, what we don’t!
Lastly, people have a lot of AI feelings: worry, concern, and anxiety. This sits on top of colleagues navigating contemporary HE, with its time poverty, shrinking budgets, cost cutting, and institutional re-organisation. People are ready to explode!
It just might be that your AI training session is the straw that breaks the camel’s back. Sit with that. Allow space for frustration in your programmes, and time for people to share their views and feelings. UCL, for example, runs a ‘Rage Against the Machine’ session as part of its staff AI literacy programme. And it’s valuable.
It not only formally holds space to listen to AI concerns; it also means that whereas frustration previously chuntered at the sidelines of most sessions, adding to their disorder, it now finds a home and is centred within a purposeful session, enabling the programme to respond.
Ultimately…
There’s no one-size-fits-all solution to AI in higher education—no universal policy, no definitive roadmap. What matters is that institutions remain agile, reflective, and courageous. Leadership doesn’t need to have every answer, but it does need to create the conditions where staff feel empowered to explore, innovate, and learn from failure without fear.
Most importantly, we must acknowledge the full spectrum of responses AI provokes—excitement, uncertainty, scepticism, even resistance. Because leading through this moment isn’t about adopting AI for its own sake; it’s about shaping a future where human values, critical inquiry, and educational purpose remain at the centre.
Thanks so much to all the contributors and organisers of the session, and especially to Jamie Cawthra (UCL), Richard Fletcher (NTU), Emma Scanlan (Canterbury Christ Church), Leanne Fritton and Alicia Owen (Manchester Metropolitan), and Chiara Alfano (Northeastern University London), for their particular contributions which inspired this blog!


