Throughout this year’s Lecture Series, regardless of the subject focus, there’s been a hovering shadow-topic. In talks ranging from mathematical modelling to women’s representation in history, artificial intelligence has been the omnipresent elephant in the room. So it seems fitting that the final talk of the series will address AI head on, at a moment when, as the speaker herself puts it, “the trajectory has shifted, very suddenly, from three months ago.”
Professor Shannon Vallor is Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence at the University of Edinburgh, where she also holds the post of Director of the Centre for Technomoral Futures. As a philosopher, she has a longstanding interest in the relationship between technology and humanity, and its impact on our wellbeing. But she’s keen to dispel any suggestion that this comes from a place of instinctive mistrust.
“Because my focus is on trying to prevent harmful outcomes from AI, it’s easy for people to come to the conclusion that I’m somehow hostile to AI, or anti-technology. I grew up a giant nerd, obsessed with computers. I was always fascinated with robots, with the possibility of AI, and was always interested in the nature of consciousness and human experience.”
She initially focused her research on the nature of human perception and reasoning, before turning to social media’s impact on our relationships when it was still an emergent technology in 2006, and then to the ethics of robotics in 2010.
“Robots were far more promising an area of innovation in 2010 than AI,” she recalls. “AI had kind of been stalled for a bit – we were making advances, but there was nothing on the market that would blow your mind.”
Robotics, by contrast, looked set to make significant changes to our day-to-day lives.
“There was an anticipation that we would start seeing not only driverless cars, potentially quickly eliminating human drivers’ jobs and changing the way our roads and transportation habits work, but also robots moving into care homes and taking on all kinds of roles in society. So I started working on the ethical considerations that were relevant to those anticipated transformations.”
However, as commercially viable, safe and useful applications for robots proved slower to arrive than expected, AI rapidly filled the gap.
“Around 2013, 2014, 2015, we started to see this take off in machine learning techniques and deep learning that was enabling new kinds of problems to be solved by AI, that people had thought were nowhere near being solved,” Shannon explains.
“My work has kind of followed the track of where the advances have gone. Because where the advances are, is where the applications and the deployment become realities. And then it becomes much more urgent as an ethicist to develop or inform relevant policy responses, as well as industry responses, because the dangers are suddenly not hypothetical, they’re already out there in the world.”
The AI reshaping the world in 2025 is not the approximation of human thought processes, far less consciousness, that used to be implied by the term. Instead, we have task-focused machines which apply cognitive approaches to very specific demands.
“That has changed the landscape radically,” says Shannon.
“It is far from the sort of AI we thought we’d have at this time. Many computer scientists would question whether it’s really AI at all. But in a way it doesn’t matter, because from a marketing perspective, AI has been redefined as what these tools do. And what these tools do is very powerful. The fact that they’re not conscious doesn’t mean that they are benign. And so there’s a lot of work to be done.”
Over the past decade, philosophers and ethicists have been keeping pace with this rapidly evolving area, working out the risks and responsibilities. The problem now is not how to put the genie back in the bottle, but how to make the case for the necessary guardrails.
“AI ethics is not a new field. It’s not as if we got caught unawares by the rise of AI, and now we’re trying to think about how to govern it. There’s a large body of immediately actionable recommendations. The problem is that there is no political will for it.”
This was demonstrated forcefully at last month’s Paris AI Action Summit, when anticipated commitments to regulatory action and responsible governance completely failed to emerge.
“There was no clear appetite at the summit for thinking about AI governance in a more serious way. The narrative was full speed ahead, take off the brakes, and let’s see where we go.”
In that context, Shannon’s contribution to the Darwin College Lecture Series will challenge three different framings of code that shape our sense of our own humanity: as followers of rule-based systems; as a species which can be understood only through some advanced cryptographic technique; and as self-replicating machines which run and execute a programme.
“The talk is going to focus on these three notions of code that have shaped how we understand ourselves as humans today, and the way that AI has become inextricable from the current configuration of human self-understanding. That is, I think, increasingly people understand themselves through the metaphors that have conditioned our understanding of AI, and of code. I’ll be challenging the audience to think about not only what we’ve gained from these ways of understanding ourselves, but what we’ve lost as well, which ultimately I think is more significant.”
Join us for Shannon’s lecture, Decoding our Humanity, at 5.30pm on Friday 21st February.