They Use It With Their Eyes Open


I recently read this perspective piece from Tech Policy Press titled "The Real Cost of the UK's 'Free AI Training for All' is Democracy", and parts of it really resonated with me. It clearly articulated how I have been feeling about both the UK and Welsh Governments' drive to encourage the public and public sector bodies to maximise the use of generative AI technologies.

If you missed it, here is also a link to the UK Government's "AI Action Plan: One Year On" policy paper, referenced in the Tech Policy Press piece. It outlines the progress the Government are making on their AI initiative, with ambitions that include making "sure people have the skills and confidence to thrive in an AI-enabled economy, to modernise public services so they work better for citizens, and to build the economic foundations for long-term growth."

The Tech Policy Press piece focuses on one area of the Government's action plan: the Free AI Training for All. What particularly caught my eye was the authors highlighting the training being offered through the AI Skills Hub. The Hub has received criticism, both from them and from others, for being poorly designed, for possibly costing the UK taxpayer £4 million (the exact figure is unclear, as there is no transparency on this), and because the courses were exclusively from big US organisations, promoting training on their platforms.

This point about the big US organisations is something I have some concerns about here in Wales. Will there be input from Microsoft, Google or even Adobe on the professional training being offered to teachers, or even on shaping the new descriptions of learning in the updated DCF? I'm sure it would be fair to say that it would be in their interests that both our teachers and learners are maximising their use of Copilot and Gemini, for example. As a recent court case involving Google highlighted, these big tech companies see education very much as a "pipeline of future users".

The authors of this piece also refer to AI literacy and importantly, critical AI literacy. They argue that:

"Building critical AI literacy for all requires accessible and independent materials, beyond a focus on individual companies and tools, alongside a range of opportunities for different communities to engage critically with learning about AI in context: not just how to adopt and use AI.”
Note in that quote the statement "beyond a focus on individual companies and tools". Again, users should have the ability to select what they feel is appropriate to use. Just a question to raise here: if schools in Wales are "encouraged to consider the benefits of adopting the national approach" through using the Hwb platform, and therefore most schools (especially primary schools) might only be using generative AI tools via Microsoft, Google and Adobe Express, is this too narrow a selection of tools for schools or individuals to really make a choice about what they want to use? I could talk about government tech solutionism, school autonomy or even digital sovereignty here, but I'll leave that for another time.

The authors of the Tech Policy Press piece go on to highlight that the courses the UK Government are promoting through the hub, "are meant to train people to be better workers and better consumers," but that what seems to be absent is any critical look at AI including "whether they should use AI at all."

If you have read any of my previous posts you can see that I am certainly viewing generative AI through a critical lens. I touched on the need for critical AI literacy in this post from November 2025, where I outlined my New Year hopes for 2026 with regards to the updated Digital Competence Framework that will hopefully be published sometime later this year. Here's a reminder of what I wrote:
"(W)hat do I mean when I say, Critical AI Literacy? Well, I would like to see a DCF that is not just focused on how learners should(?) be using generative AI tools appropriately and effectively, but there is also a focus on a deeper, more critical look at the wider ethics around generative AI. That there are opportunities for learners to gain a deeper understanding about what generative AI is, how it's built, trained and the detrimental effects on the environment and on society. I would like to see learners fully informed about this technology so that they can make truly informed ethical decisions about whether or not to use it, or at least make the decision if they wish to use it less."

In fairness, recommendation 3 from the Estyn report states, "Ensure that the curriculum provides pupils with the digital literacy skills to engage ethically and critically with AI," going on to say that the Welsh Government should, "Update the Digital Competence Framework (DCF) to incorporate AI-related digital literacy, including critical evaluation, ethical understanding and developmentally appropriate guidance for pupils." If this is suitably addressed, without any external influence, I believe we could have a very positive addition to the DCF.

I'll finish off with how Dr Sam Illingworth describes critical AI literacy:

"The goal is to make deliberate, informed choices about when and how to use it. People with strong critical AI literacy use AI. They use it with their eyes open."

Using it with their eyes open - absolutely! As I keep coming back to, in education in Wales we are aiming to develop ethical and informed learners, which means learners (and teachers) will need to be looking at the world, and what they do within it, with "their eyes open," allowing them to make those ethical and informed decisions, especially around generative AI. Helping them to question where it can be beneficial and appropriate for them, but also leaving them room to question its social, intellectual and environmental impacts, and maybe even to become resisters or avoiders. The choice will then be theirs to make and not forced upon them. 🤞







