“As the College continues to adapt its curriculum to keep pace with advances in AI, our focus is also increasingly on equipping students with an ethical lens through which to view its application.”
Science Fiction
I was raised on a steady diet of science fiction short stories. Good sci-fi is made from a mix of ingredients, including wonder, hope and the suspension of disbelief. However, the strongest flavour is usually fear: dystopian tales painting a bleak picture of future worlds in which technology takes over. My earliest introduction to the concept of machines running amok was the softly spoken HAL 9000, the onboard computer in 2001: A Space Odyssey. When an astronaut working outside to repair the spaceship instructs HAL to open the pod-bay doors to let him back in, he is met with the apologetic response: “I’m afraid I can’t do that, Dave.” Perhaps for the first time, audiences around the world were confronted with the chilling possibility that the devices we built might not always do our bidding.
Jump forward 20 years, and The Terminator landed on Earth. No longer were the machines simply refusing to do what they were told; they were now the ones doing the telling. Usually, the robots took the form of large, muscular men (although, ironically, even when they revealed their true selves, they were still built in the image of the human beings they supposedly despised). This time, they weren’t locking us out, they were hunting us down and eradicating us with futuristic weapons we hadn’t dreamed of. Yet.
When the 21st century arrived, and real tech actually began to resemble some of those worrying predictions of the past, screenwriters had to dig even deeper into the recesses of their dark imaginations. The Black Mirror TV series offered up a new era of future horrors to haunt us. Killer mechanical guard dogs gone rogue. Single-minded and solar-powered, endlessly pursuing hapless passers-by to the death. Or a world in which, through the implantation of a device called a “grain” behind their ear, people’s innermost thoughts and memories could be replayed on screens for others to judge. Or autonomous drone insects, tiny robotic bees designed to pollinate crops but instead accidentally tuning into social media posts and then attacking people who were being trolled. Suddenly, arguing with HAL’s gentle voice seemed a bit tame.
Yet, although dominant, fear is only one part of the sci-fi recipe. As novelists dream of what may one day come to pass, they are equally capable of imagining a better world, often a miraculous one. The prospect of annihilation or eternal slavery in the matrix keeps a storyline going, but writers of future fiction also offer glimpses of a world free from suffering. One where complex mechanisms bring equality and enhance human potential. Androids of various kinds have been a popular way to showcase the benefits of integration between humans and machines. I grew up watching the bionic heroics of Steve Austin, the Six Million Dollar Man (that was a lot of money back then, OK?). Then along came C-3PO and R2-D2, setting the standard for helpful robotic companions. Even Arnie, The Terminator, came right in the end.
Science Fact
So far, so fictional. Here now are two short stories from the current century, the only difference being that both are true tales. The first is of the thousands of women who may not be walking the Earth today were it not for Google. Four years ago, researchers from Imperial College London, working with Google’s AI company DeepMind, trained an artificial intelligence system to identify breast cancer by spotting abnormalities in mammograms. Reading scans like this is not new; doctors have been doing it for decades, but sadly they sometimes get it wrong. Yet in a study of images from nearly 29,000 women, the AI system consistently outperformed the humans, reducing both false positives (where a mammogram is wrongly diagnosed as abnormal) and false negatives (where a cancer is missed by the doctor). The AI beat the radiologists by 5.7% and 9.4% respectively. To put that in human terms, that is over 1,600 women who weren’t put through unnecessary anguish and medical interventions, and 2,700 who may be alive today because a tumour was detected and treated. All thanks to AI.
In those same years, war has raged in Ukraine. When it first broke out, airborne drones played a fairly small part in the fighting, often controlled by amateur volunteers rather than soldiers. Today, drones are the central and most deadly weapon in the conflict. Both sides have them, large and small. Some watch for movement, others deliver a deadly attack. Jamming technology has rapidly been developed to disrupt the radio signals that control the drones. However, the one the Ukrainians fear the most is the Russian Lancet. Not because of its payload, but because it flies completely autonomously, meaning there is no signal to jam. Once launched, its onboard AI takes over and it simply keeps flying until it finds a target and attacks it. The Black Mirror screenwriters were prescient.
What Happens in the Sequel?
So fiction becomes fact, and we now live in a world where lives can be saved or ended without human intervention. And that is just with early artificial intelligence. What happens next, as Generative AI emerges and computer programs don’t just act autonomously but also learn and improve with every task? What do the past 50 years suggest we should be teaching our students about this new capacity, as they prepare for the next 50? To fear it? To embrace it? To accept the inevitability of being swept along by scientific advances they can’t hope to fathom? Or to get a degree in coding and make a fortune as the architects of the next generation of computing?
Perhaps a Blend of Them All?
One thing is certain: young people need to be armed with enough understanding to be able to spot AI in action. In our classrooms, we teach them how to sort fact from fiction. To know when they are being manipulated by impersonal software. To beat bots and forgo fakes. We encourage them not to be cowed by the hype and hysteria about AI, but also to recognise and continually question its presence in their lives.
And as the College continues to adapt its curriculum to keep pace with advances in Generative AI, our focus is also increasingly on equipping students with an ethical lens through which to view its application. Their generation will live in a world of intelligent implants and bionic body parts. They will judge and be judged by algorithms. Be assisted, but also potentially abused, by autonomous agents. Many of our students will grow up to be the authors of these new technologies, or the wielders of their power. Things will turn ugly if AI is not controlled, but that is not a reason to abolish it. The world will be a better place if it is harnessed.
Today’s children are still the luckiest to have ever lived, and science offers them better still in the years ahead. In their hands will lie the deployment of technologies that my generation only ever marvelled at on the big screen. Whilst it may still be fanciful to think technology will ever become sentient and destroy them, there is a very real prospect that uncontrolled applications may lock some of them out of a brighter future. To avoid that, human learning needs to keep pace with machine learning.
By Mr Peter Clague, Principal
*Previously published in the September 2024 edition of Network, the magazine of the St Leonard’s College community.