Opportunities for an Updated DCF to Develop "Ethical and Informed Citizens"


I’ve been reflecting on the phrase “ethical and informed citizens”, which is one of the Four Purposes from the Curriculum for Wales. The reason for my reflecting is that I’ve been using this phrase in relation to my current thinking around critical AI literacy. As you may have gathered from my previous posts, I believe that critical AI literacy for both learners and teachers is very important. If learners and teachers are using, or are going to use, these tools, then they should be doing so with “their eyes open”: having, at the very least, a basic understanding of how these generative AI systems work, along with a wider understanding of the impact these tools have on society, the environment and intellect. In doing so, we can hopefully help users become those "ethical and informed citizens".

According to Dr Sam Illingworth, critical AI literacy is different from AI literacy. He describes the difference as follows:
“AI literacy teaches you how to use AI tools: prompting, workflows, integration. Critical AI literacy teaches you how to think about AI tools: what they cost, who they serve, what they change in you, and when to refuse them. AI literacy is a technical competence. Critical AI literacy is a judgement practice.”
Sam believes that AI literacy is a technical competence; in other words, the user knows how to use this digital tool, hopefully in an appropriate and effective way. Critical AI literacy, on the other hand, is a judgement made by the user, based on a much wider knowledge and understanding of the tool, such as how these tools work and their impacts on society, the environment and intellect. The section of the above quote that really interests me is, “and when to refuse them.” As far as I can see, that statement, or something similar, doesn't appear in many of the generative AI documents available on Hwb, for instance. In my opinion, they do highlight some of the issues, for example bias, but then tend to focus on the responsible use of AI, or on balanced and considered approaches to its use in school, without going so far as to explicitly mention the possibility that users may refuse to use these tools. In a recent post, I referred to an article in Tech Policy Press about the UK government's latest announcement about AI training for all. The authors felt that the courses being offered via a new AI Skills hub “are meant to train people to be better workers and better consumers,” but that what seems to be absent is any critical look at AI, including "whether they should use AI at all".

I realise I’m going to be pedantic here, but I would personally prefer the phrase “informed and ethical citizens” to "ethical and informed citizens". The reason is that I believe that in order to make ethical decisions, you first need to be informed, to understand what the issues are. How can you make ethical choices if you don't know what you don't know? In this instance, you need knowledge and understanding about generative AI: being fully informed about the good and the bad, so that you can then make a personal, ethical decision about how and when to use the tool, or even make the decision to limit your usage or not to use AI at all. Therefore, I believe that critical AI literacy is essential to developing "ethical and informed citizens."

It will be interesting to see how the updated digital competence framework (DCF) will approach this. Estyn recommended that the updated DCF should,
"incorporate AI-related digital literacy, including critical evaluation, ethical understanding and developmentally appropriate guidance for pupils." 
There you can see AI-related digital literacy mentioned, along with critical evaluation and ethical understanding. Based on the quote from Dr Sam Illingworth above, this could mean an updating of the 'Producing' strand of the DCF to include the use of AI to generate text, images, etc., along with a critical evaluation of the output. That would broadly cover the AI-related digital literacy 'stuff', while the ethical understanding could perhaps be added into the 'Citizenship' strand? I'm obviously guessing, as I have no idea what the update will look like, but that structure would make sense to me. It's positive that critical evaluation and ethical understanding are recommended by Estyn. But they actually went further and 'fleshed out' their AI-related recommendations to schools, saying that schools should:
  • implement the requirements of the DCF to teach pupils the risks, challenges and benefits of AI in education and society.
  • use AI tools in teaching and learning only where there is clear evidence of a positive impact on pupils’ progress and well-being.
  • ensure that pupils develop an understanding of the importance of referencing AI use, its impact on academic integrity, and its potential to limit critical thinking when misused.
In the first and third points we have Estyn recommending that schools implement the DCF, which should cover the risks, challenges and benefits of AI, in other words, the good and the bad, and the wider societal impact of AI, not just in school. Also, the learner should be aware of the possible negative impact on their learning and their critical thinking skills (cognitive offloading). So, aspects of critical AI literacy are definitely being covered there.

The second point, about only using AI in teaching and learning where "there is clear evidence of a positive impact on pupils' progress and well-being", is fascinating, and it will be interesting to see what clear evidence is produced to prove this one way or another. Thinking back through all my years providing advice, support and training to schools in using digital technologies, proving that digital tech has directly had a positive impact on pupils' progress and well-being has been difficult, to say the least. The words 'improving pupil motivation' and 'engagement' have often been bandied about as the reasons for using digital technologies in the classroom, so it will be interesting to see any evidence that AI can offer something different to schools and learners. Have a look at the Education Endowment Foundation's (EEF) guidance report, 'Using Digital Technology to Improve Learning', to see what they recommend schools should consider before implementing new digital technologies in the classroom. I particularly like this quote from them:
"New technology can often appear exciting. However, it can become a solution in search of a problem unless it is introduced in response to an identified need."
It would be hard to argue that many teachers and learners don't currently view generative AI as exciting; it's quite possibly the new, shiny thing to seduce schools. With regards to an "identified need", I wonder what schools or teachers would say the need for generative AI is? What does it help them do that they weren't able to do before, other than doing something quicker? Interestingly, the EEF report does mention pupil engagement and motivation, but notes that the "relationship between technology, motivation, and achievement is complex," something that needs to be thought about when schools are, for example, looking for that "clear evidence of a positive impact on pupils’ progress and well-being," as mentioned by Estyn. This paper (2025), a review of academic papers on AI in primary and secondary schools, seems to suggest that AI is lightening teachers’ workloads, personalising learning, broadening access and enriching educational experiences, and yes, it does also mention greater student engagement and motivation! These results were actually not too dissimilar to the survey results in Estyn's AI report. All of these, I would argue, have been used as reasons to use digital technologies in schools for many years. It will be interesting to see whether generative AI brings something new and positive to the table for schools.

So, in my opinion, the Estyn recommendations for the updated DCF do seem to provide the potential for schools to address critical AI literacy with their learners. If the Welsh Government really do take on board what Estyn have proposed, then it may help to develop those ethical and informed citizens, who are in a position to question their use of AI, or whether they should even use it at all. But I'll wait and see how these recommendations are interpreted when they are eventually published, before getting too far ahead of myself.

I'll finish with Dr Sam Illingworth's working definition of critical AI literacy, which I think is relevant to all users of AI, teachers and learners alike.
Critical AI literacy is the ability to:
  1. Evaluate AI outputs for bias, error, and missing context.
  2. Recognise how AI shapes your thinking, your voice, and your behaviour over time.
  3. Assess when AI helps and when it harms, and how to tell the difference.
  4. Understand the costs behind AI systems: the labour, the ecological footprint, the social consequences.
  5. Make deliberate choices about when to use AI and when to refuse it.