Hello, I’m Julie Gould and this is Working Scientist, the Nature Careers podcast. This is our second series of 2019 and we’re looking at technology.
Now, the first episode in this six-part series is going to look a little bit at artificial intelligence and its impact on science, careers and universities.
But before we go into any of that, I would like to introduce Jeff Perkel. Jeff Perkel is the technology editor at Nature, and he is very interested in technology and how it impacts science.
Jeff Perkel, you’re the technology editor here at Nature. Thank you so much for coming in and being a part of our technology series. I’m really looking forward to this series because I mean science wouldn’t really happen without technology, would it? I mean there wouldn’t be any research without it.
Technology is basically the way science is done, so to me it's always been one of the most interesting aspects of scientific research – how the science itself is actually done.
So, in this series, we're going to be exploring several different aspects of technology. One of the things I'm really interested in is how technology changes so rapidly that it's often quite difficult to keep up with those changes and the impact they have on science and on the careers of scientists, because you could learn how to use one piece of kit or one piece of software during your PhD, but by the time you graduate it's obsolete, or it's on to the next level, or who knows what happened to it.
So, I find it quite an interesting balance between trying to keep up with the science and keeping up with the technology whilst also trying to keep up with everything else in the world.
I mean that's definitely a truism of science, and it's probably a truism of the world, and I think the answer to that – well, there's no one answer – but I think the answer is that scientists increasingly need to be flexible.
In a sense, doing science is learning how to do the process of science and if you can use it with this piece of kit, then presumably you can adapt to using that piece of kit. It’s learning the process and the actual hardware or the specific reagents or materials or laboratory equipment that you need to make that happen can vary but the process and the philosophies of what you’re doing sort of stay the same.
We’re going to be talking a little bit about artificial intelligence.
Now, I know that deep learning is something that you have covered in a Nature article, so tell me a little bit more about the artificial intelligence and deep learning article that you worked on.
So, we had a technology feature on deep learning for biology in our 20 February 2018 issue, written by freelance science writer Sarah Webb. Deep learning is a variety of artificial intelligence and machine learning, and it has become incredibly popular in life-science research – and probably in other areas of research as well – as a way to find patterns that would be difficult for humans to recognise on their own, in images, in gene sequences and so on. Sarah looked into how these tools are being used to answer questions that we might not have been able to think about just a couple of years ago.
We'll hear from Jeff in the other parts of this technology series, but for this particular episode, let's look at artificial intelligence. So, what is it? What is artificial intelligence? Well, you could think about it as a combination of computer systems that exhibit various characteristics of human intelligence – the ability to think and search and learn, and to turn learning into action.
It's a convergence of the internet of things, cloud computing and large-scale data processing – all of these things have come together at the same time to form artificial intelligence.
Now, in August 2017, Mark Dodgson from the University of Queensland and David Gann from Imperial College wrote an article for the World Economic Forum about artificial intelligence and the impact it’s going to have on universities, and they said: “We believe that AI is a new scientific infrastructure for research and learning that universities will need to embrace and lead, otherwise they will become increasingly irrelevant and eventually redundant. Through their own brilliant discoveries, universities have sown the seeds of their own disruption. How they respond to this AI revolution will profoundly reshape science, innovation, education and society itself.”
When he was in London last year, I spoke to Mark Dodgson and asked him how he felt about that article and particularly the statement I just quoted, and whether or not he thought it was still as true now as it was then.
Well, universities are the source of a lot of the science and the technology behind artificial intelligence so, in a sense, they’re the source of their own disruption.
And I stand by the idea that AI, in the combination of the various technologies that comprise it, will provide a new scientific infrastructure and has got the considerable opportunity to disrupt existing university activities.
So, for example, one of the objectives of artificial intelligence is the notion of deep learning. Now, if universities aren't about deep learning, I don't know what is. So if you've got these combinations of technologies and algorithms that can undertake the kind of deep learning that universities have usually had a monopoly on in the past, then it's going to provide a considerable challenge to universities.
And AI affects all activities across universities, so it affects the teaching activity, it affects research, it affects the external engagement and the internal management of the university, and I think unless universities get pretty coherent strategies to deal with this technology across the range of activities, they will struggle because there are plenty of commercial organisations nowadays that are capable of using these technologies to be able to provide the sort of services that universities have provided in the past.
I’ve heard someone say to me once that universities are very much like the Catholic Church in that they are very reluctant to reform, to change.
They are very traditional in the way that they do whatever it is that they do. How much of a hindrance do you think that is going to be when it comes to accepting and taking on board these new technologies like artificial intelligence?
Well, they’re great survivors. The average age of the top 20 European universities is over 340 years, and 7 of the 20 are over 400 years old, so they’ve been around for a long time. But I think the combination of these new technologies is providing a significant threat. There will always be places where you need creativity, but I’m not sure if the universities, in my experience, are really coming to terms with the extent of the challenge that they face. Some are doing very well. Some are using these technologies to improve their efficiencies in teaching, for example, or to improve the outcomes of their research, but I think we’re on the cusp of a significant disruption in higher education and I’m just not sure how universities are prepared for it.
So, let’s talk a little bit about the universities that are using these technologies for their teaching and for their research. So, can you give me some examples of how artificial intelligence is being used in those respects?
There are experiments taking place in universities such as Georgia State or the Norwegian Business School, which are using these kinds of virtual tutors to improve learning outcomes, and some universities, such as Georgia State and Georgia Tech, are using AI to predict student performance, so that you can predict whether students are going to fail and intervene before they actually do fail. In research, anything that's repetitive and simply done can be taken on by AI – there are examples of AI systems creating hypotheses and undertaking whole scientific experiments – and AI also provides the opportunity for greater interdisciplinarity.
It allows people to be able to share datasets and learn, see new observations, see new connections through combining different insights from different disciplines, so in the research field there’s significant opportunities.
A lot of people could well be thinking well, hey, wait a second, a lot of my work is all of these things that you’ve just said – what am I going to do?
Why am I spending my time doing a PhD in subject ABC when in the future there’s going to be some artificial intelligence software that could do it too?
Yeah, well I think the challenge is going to be developing careers where people can work alongside AI, use it as a tool to augment what they’re doing and to accentuate the creative and collaborative elements of what they do, more high-value-added activities if you like. So, theory becomes ever more important – being able to distinguish what’s real and what’s fake, what’s misleading, what’s based on fundamental understanding. I think those kinds of tasks are going to become more central in academic work and in training academics, thinking about educating academics in the future.
How do you recommend that we prepare the PhD researchers at the moment so that when they do finish their PhD research that they are ready to embrace that technology and use it and work alongside it when they go into an academic career, for example?
I think that's a question that many universities are struggling with at the moment and need to really get their heads around. One of the problems with PhD training is that there's so much pressure to focus on the minutiae of the particular project that they're working on.
I think space needs to be made in the PhD programme for people to reflect more on the context in which they're learning, the social obligations they have as scientists, and the consequences of the science that they produce.
What would there be instead if universities don’t keep up with the technologies that they themselves have created?
I think you're already seeing – through the use of these technologies – the growth of commercial providers of education, the massive open online courses and so forth. You see extremely wealthy philanthropists creating their own research institutes, trying to avoid some of the legacies of institutional bureaucracy that you find in many universities, and companies themselves investing in science.
So there are alternatives for research and for teaching that are already there. Unless universities get up to speed, start addressing some of these competitive threats, draw on their historical, cultural strengths – collegiality, inquisitiveness, curiosity – and get rid of some of the obstacles that they continually put in the way of innovation and entrepreneurship, they will suffer, because the innovation and entrepreneurship that these new technologies facilitate will be used by other parties, and universities will be left behind.
Now, there are some researchers that disagree and don’t think that artificial intelligence will make such a big difference and Lee Cronin, who’s the Regius Professor of Chemistry at the University of Glasgow, is one of them.
The thing is that AI is a tool, so I guess AI might have a similar impact as a typewriter in universities, and the personal computer. But there’s no magic in AI – it’s just a tool. It’s just a series of mathematical processes to allow you to extract meaning or at least some degree of prediction from large datasets. Then I guess the big question is how much meaning can you put into this?
And that's quite complicated, because if I was to investigate the world and write down Newton's law of gravity, I could do that. If I was to train an AI to tell me Newton's law of gravity, well, it would do that, but in a more complicated way, and I might not understand what gravity is from the AI. So I think there are serious question marks about overusing mathematical techniques to black-box a problem and not actually understand the causation.
And if you don’t understand the causation then poorly deployed machine learning will actually prevent you from understanding the universe, and of course, what universities want to do is they want to help us further our understanding of the universe. So, I think machine learning and artificial intelligence in general will be great new tools as they become accessible to people outside of computer science and applied mathematics, but they’re not really going to make a massive difference.
There have been a few people who have said to me that the way artificial intelligence will change universities is that it will change the way teaching is done, it will change the way that grading is done, and it will be able to predict when students are going to drop out based on their results on previous tests. Do you not think that that kind of… no?
No, I mean I think that's nonsense, and I'll tell you why: it's a technique you can use to help do all sorts of things, but it's no panacea. If we want to use AI as a tool to make grading cheaper, then fine, do that, but universities aren't about grading. Universities are about educating people to think critically, about preparing people with sufficient high-level skills to add to the economy and be creative. But the thing that I really love about our universities, particularly in the UK, is the creativity that's there, and there's no AI that can assess creativity – there just isn't. Anyone who tells you that there is, is making it up.
So, thank you to Mark Dodgson from the University of Queensland in Australia and Lee Cronin from the University of Glasgow in Scotland. In the next part of this series, we’re going to be looking at the importance of learning how to code and the impact that it can have on a research career. Here’s just a little snippet from an interview I did with Brian MacNamee who is part of the Insight Centre for Data Analytics at University College Dublin.
I think it's a really good idea for people to learn to code, because it gives them the greatest freedom and flexibility. They don't need anyone else in order to do whatever it is that they want to do. It puts them right in touch with their data, and with an enormous range of packages that allow them to do different kinds of data analysis, work with different kinds of datasets, and produce different kinds of outputs from the analysis of those datasets.
Thanks for listening. I’m Julie Gould.