AI will eat your English Department #
And Humanities will be for pudding.
TL;DR: The Times has been good enough to publish my letter on this topic - so just read that if you want. Otherwise: given the infinite space afforded by a personal blog, I thought it worth elaborating my thinking here.
Given all the problems faced by British head teachers - the yawning gulf between pay and inflation, opaque inspection frameworks, cratering teacher training and retention rates - one could forgive them for choosing to focus on the current fire-fighting, rather than the looming pall of smoke on the horizon.
Which is why it's even more salutary to see them turn their attention briefly - albeit in the form of an advisory body - to the question of AI in education. From The Times:
A coalition of leaders of some of the country's top schools have warned of the "very real and present hazards and dangers" being presented by AI [i.e. Large Language Models]…
Head teachers' fears go beyond AI's potential to aid cheating, encompassing the impact on children's mental and physical health and even the future of the teaching profession…
There is a lot to praise here, not least the use of the present tense. Anyone who thinks that ChatGPT isn't already being used by students in their classroom is either naive or in denial; it's gratifying that even the grand old man of education Sir Anthony Seldon recognises that this is something happening now, rather than something that can wait 'til the next King's Speech.
But one line stood out to me (not mentioned in the coalition's original letter, but elaborated in the subsequent Times article):
The group will create a website led by heads of science or digital at 15 state and private schools. It will offer guidance on the latest developments in AI and what schools should use and avoid.
The natural question to ask is: why these subjects specifically? I've not spoken with the coalition, and have little more to go on than a letter, two Times articles and some cursory Googling, but if the threat is as serious as they suggest (and it is), the question is worth asking.
Dancing on the razor's edge #
Perhaps I am being reductive, but there are two possible trains of thought behind the coalition's focus on science and digital.
The first train of thought perhaps goes something like this:
- LLMs are a new technology.
- Science and Digital are the subjects that produce the engineers and scientists that build these technologies.
- Q.E.D., our subject specialists in those areas ought to be best placed to understand these new technologies.
I have worked with some exceptional heads of science in my career: teachers capable of turning the most tedious lesson on covalent bonds into a whirlwind of electron excitement. But the skill set needed to design and deliver a science curriculum has almost zero cross-over with an understanding of cutting-edge Large Language Model research. Show me the part of the physics spec that asks a teacher to explain neural networks, or the chemistry module that analyses Tensor chip fabrication processes.
You may have a little more luck if you're a small school, with a head of 'digital' who doubles as head of IT, but even then: do you seriously expect them to have built a working understanding of these brand new tools, whilst still trying to fend off waves of ransomware attacks on your Windows NT-powered school mainframe?
"Have you considered becoming an Apple-accredited school?"
The truth is, these bots are so far outside our curricula that to expect anyone on your staff to have existing expertise in them is absurd, let alone your 'head of science or digital' (and let's be honest, if they were really into LLMs, do you think they'd still be hanging around on a U1 payscale? Perhaps they'd just be writing tedious blogposts instead).
Don't let the truth get in the way of a good tokenized response #
The second train of thought is perhaps this: Science and digital heads have subjects most closely adjacent to AI, so they will likely be the ones that see their subjects affected the most.
But to think this way is to miss the whole point of ChatGPT and other chatbots. They are designed to write prose, not formulae or scientific analyses. Precision, accuracy and truthfulness are not the priority here; fluency is.
It's hard to wrap one's head around this, because it runs contrary to the way computers are supposed to work. For decades, we've known them as glorified calculators, capable of exact detail, but not the nuances of human speech and art. Machines for truth, not beauty.
Now, for the first time, we have a technology that does the very opposite. This is not a gradual, iterative change to computing, but a radical, paradigmatic shift. You may access ChatGPT from your browser, and type into it like you would Google, but it is a wholly different technology with a wholly different purpose.
As I've put it more succinctly here:
What makes these new large language models such as ChatGPT so exceptional is their ability to produce fluent and plausible prose: they are far better suited to writing essays on Coleridge than calculating covalent bonds. It is the literature student who reaches for AI first, not the chemist.
With all this in mind, the decision to focus on Science and Digital departments suggests a lack of understanding of the nature of the technology, or indeed the dangers posed.
Tidy prose, tidy mind #
So who should be involved?
Well, I am a little biased, but if prose is the name of the game, then English (and our cousins in the Humanities) ought to be top of the list. Implicitly, we use the requirement of prose as a proxy for thought. Fluent essays aren't rewarded just because they sound nice, but because they act as a symptom of organised, complex thinking.
English Literature and Language are, perhaps, most at risk here. Ours is the subject least rooted in the concrete and factual (beyond plot-points and grammatical rules), and most centred around nebulous and subjective ideas of style and structure. The best English students are those who read the most, and the bots have read everything.
We now have a technology that can easily surmount the once wholly-human task of producing a passable essay; not simply paraphrasing an existing one, but making something entirely new and untraceable. The barrier to entry for machines has been comprehensively vaulted over. How we re-establish that barrier, how (or if) we build a new proxy mechanism for measuring complexity of thought is another matter, and one the margins here won't contain.
But in the immediate: speak to your heads of English, Humanities and Arts. Either they're already nervous of what's to come, or they ought to be. They will be on the front-line of this LLM-powered revolution, so help them build their barricades.