A new centre at the University of Cambridge is exploring not only the exciting possibilities and dangers of emerging artificial intelligence, but also our preconceptions of what artificial intelligence might be
From Mary Shelley’s novel Frankenstein, published in 1818, to the android Ash, who betrays the crew in the 1979 movie Alien, to the robot Ava in the 2015 movie Ex Machina, culture has always been effective at crystallizing social anxieties around scientific and technological developments – particularly when it comes to creating human-like intelligence. With artificial intelligence and robotics increasingly appearing in the headlines of our news feeds, there is no better time for the Leverhulme Centre for the Future of Intelligence. The Centre is a collaboration led by the University of Cambridge, with links to the Oxford Martin School at the University of Oxford, Imperial College London and the University of California, Berkeley, and supported by Cambridge’s Centre for Research in the Arts, Social Sciences and Humanities (CRASSH).
By bringing together thinkers from many different fields to explore and imagine the opportunities and challenges that advances in artificial intelligence may pose to humanity, the Centre also brings a more measured and useful perspective to the subject. Dr Seán Ó hÉigeartaigh, Executive Director of the Centre for the Study of Existential Risk (CSER), who helped develop the proposal for the new Centre, explains that artificial intelligence was very much part of the CSER agenda, ‘in part because there have been so many advances in recent years; certain thresholds have been passed, which means that the impact has been really quite substantial and moving quite quickly.’
But they also realised that ‘focussing simply on the catastrophic risk angle limited us in terms of scope, given that there were so many things to consider in the case of artificial intelligence.’ The proposal foresaw a centre addressing artificial intelligence with a similar belief in the necessity of drawing experts from many disciplines, and ‘thinking about not only the long-term but also the near-term impact, and not only the risks but the opportunities and challenges.’
NARROWER INTELLIGENCE
While artificial intelligence makes great headlines, and often great movies that capture some of our anxieties, the new Centre’s work offers a different, more practical tone and set of perspectives. Stories that scare us about AI may give us a pleasurable shiver, but current AI remains relatively limited. As Dr Ó hÉigeartaigh explains, ‘I just want to clarify that all of the artificial intelligence that we see in the world around us at the moment we could consider narrower intelligence, in that they are applications that are extremely good at doing specific tasks – driving round the city, or playing chess, or enabling a search engine. But there is nothing in existence now that measures up to the kind of general problem-solving or cognitive ability of a dog, never mind a human being.’
This measured, reflective approach enables the Centre to be open to and excited by the possibilities of new technology, while being aware of the serious questions it raises. The ability to learn from the world, adapt and do many different things still only exists in biology, reflects Dr Ó hÉigeartaigh. ‘However, it is our view that if it has emerged in biology, it’s proof of principle that at some point we will understand intelligence well enough to be able to recreate it.’ Dr Ó hÉigeartaigh’s own background is in computational biology, but he has spent many years running interdisciplinary programmes, and at CSER the questions being explored benefitted from the multidisciplinary thinking of many different perspectives. These big questions concerning the future, says Dr Ó hÉigeartaigh, are not ‘the remit of computer science alone to answer, or computational biology alone; they need policy expertise, they need economics and legal expertise, they need sociology expertise, they even need philosophical expertise to think about the really long-term, big-picture questions.’
APPLYING AI
Indeed, the diversity of perspectives required is not just part of the intellectual chemistry that produces original thinking and ideas; it is partly a response to how new technology itself creates knowledge and attracts different kinds of expertise. ‘From my point of view as a scientist, I see many of the challenges that we face being ones where we have to analyse a huge amount of data from a wide range of sources and make sense of incredibly complex interconnecting systems,’ says Dr Ó hÉigeartaigh. ‘That’s a very difficult thing for even teams of human beings to do. The systems we are currently developing are tailor-made for making sense of big data. You can imagine them being helpful in analysing a million genomes to figure out the basis of cancer, or in analysing different aspects of climate change, or in making our energy grid more efficient, solar grids better or homes smarter. So many of the challenges we face are going to be aided if we think about how we can apply artificial intelligence to them.’
Entangled with the scientific issues are the socio-political-cultural aspects of accelerated technological change. He points out some near-term anxieties, such as taxi drivers or long-haul drivers being put out of work by driverless cars; but the same technology may also free up time for people to do other things, which is why discussions need input from many different kinds of expertise. Equally, there are dangers to consider, such as the fact that artificial intelligence will allow more sophisticated drones in the near term; but, as Dr Ó hÉigeartaigh points out, that is nothing like an artificial equivalent of human intelligence.
There is a long history of unsuccessful predictions of a more general artificial intelligence – as opposed to the narrow kind currently operating in various technologies. ‘A betting man might say the current enthusiasm might also be wrong,’ says Dr Ó hÉigeartaigh. ‘But we are also seeing an unprecedented level of investment in this, and seeing some exciting projects focussing on more general approaches to intelligence. Even if it is only a 50 percent chance this will happen within this century, there should be some people thinking about it and working on it.’ Equally important, he points out, even if full AI does not come to pass, the technological advances made along the way will be significant, and their social, cultural and political impacts need to be considered, debated and thought through.
DIFFERENT KINDS OF INTELLIGENCE
Another issue embedded in the popular discourse and debate around artificial intelligence is the extent to which our thinking is too anthropocentric: we need to consider that there are different kinds of intelligence in our world. Dr Ó hÉigeartaigh agrees; from human intelligence to corvid intelligence, ‘the chances are that not only are we anthropocentric, but we are planet-Earth-centric. We should not limit ourselves to anthropocentric notions of intelligence. One of the first projects we describe in our initial phase we call “Kinds of Intelligence”, and we’ve already started to have a couple of pre-project meetings.’ These involve Murray Shanahan, Professor of Neuroscience at Imperial College London, alongside experts in bonobo intelligence and mathematical logic, and a machine-learning expert. ‘All of these people are thinking about whether we can come up with some slightly crisper ideas about different sorts of intelligence capabilities. It is very hard to say what intelligence is, but perhaps a little bit easier to say what intelligence does, and to work from there.’
EVOLVING INTELLIGENCE
There is also the question of how such artificial intelligence may evolve. Evolution in biology proceeds by trial and error over time and, as Dr Ó hÉigeartaigh explains, some organisms have a higher error rate than others, allowing them to adapt rapidly, while others have a lower tolerance for errors. ‘When it comes to how we design our algorithms and our artificial intelligence, we as programmers have the choice about how we might want to do that. There is a class of machine learning called evolutionary algorithms that allows some element of trial and error.’ There are reasons why you might want to be open to the variations that emerge; equally, he says, ‘there are also reasons why we might not want to do that, because it might lead us up a lot of dead ends or it might lead us to consequences we don’t want.’
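To make the idea concrete, the short Python sketch below is a hypothetical illustration of the kind of evolutionary algorithm Dr Ó hÉigeartaigh describes – it is not code from the Centre, and the names (mutation_rate, fitness, evolve) are invented for the example. The mutation_rate parameter plays the role of the biological ‘error rate’: higher values explore more variations per generation, at the cost of more dead ends.

    import random

    TARGET = [0.0] * 10  # a toy goal: evolve a vector of numbers towards all zeros

    def fitness(candidate):
        # Higher is better: the negative squared distance from the target.
        return -sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

    def mutate(parent, mutation_rate):
        # Trial and error: each gene is randomly perturbed with
        # probability `mutation_rate`, otherwise copied unchanged.
        return [g + random.gauss(0, 0.5) if random.random() < mutation_rate else g
                for g in parent]

    def evolve(mutation_rate=0.3, population_size=20, generations=100):
        parent = [random.uniform(-5, 5) for _ in range(10)]
        for _ in range(generations):
            # Generate variant offspring, then keep the fittest individual
            # (including the parent, so fitness never decreases).
            offspring = [mutate(parent, mutation_rate) for _ in range(population_size)]
            parent = max(offspring + [parent], key=fitness)
        return parent, fitness(parent)

    best, score = evolve()
    print(f"best fitness after evolution: {score:.4f}")

Raising mutation_rate in this sketch speeds up early progress but makes the search noisier near the optimum – a simple analogue of the trade-off between high and low error rates that he describes in biological organisms.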
There are also different evolutionary forces at work in research itself. A breakthrough in a scientific field leads to a burst of progress that opens up the area, attracting more brainpower, more PhD funding and more resources. ‘I think we are seeing that at the moment, for example, in deep learning, where it’s achieved a lot in its early days and as a result has a whole lot more funding and many more brilliant people applying themselves to it,’ he says.
THE FUTURE
Likewise, in terms of AI, it is reasonable to think that there are conceptual breakthroughs still to be made – but whether that happens in three years or 33 years is entirely unpredictable, as is how much such a breakthrough will accelerate advances in the field. ‘That introduces an element of great uncertainty, and it’s one of the reasons I think it’s foolish to say we are making such progress that we are guaranteed to have general artificial intelligence by 2070,’ he says. But breakthroughs will happen at some point, and there need to be places like the Leverhulme Centre that support people to think inventively and creatively about something that will have significant social effects.
The Centre functions as a hub, pulling in visitors from academia and industry, and organising summer schools, workshops and conferences. ‘We have a long-term ambition to use this as an opportunity to build up a community of people who are thinking about these things, and to encourage the next generation of thought leaders and research leaders to consider these challenges that affect all of us. I’m confident the brilliant young people who will come through our summer schools and workshops will play a big role in the next generation of industry leaders and policy leaders in this field.’