[0:00] I would say about 90% of the breakthroughs that underpin the modern AI industry were done either by Google Brain or Google Research or DeepMind, so one of our groups. The returns are still very substantial, although they're a bit less than they were, obviously, at the start of all of this scaling. We have amazing guests on the show, but very few, honestly, would be considered in the same realm as Newton, Turing, Einstein. Our guest today is one of the greatest minds on the planet, and I consider myself incredibly lucky to have had the chance to sit down with him. Those labs that have the capability to invent new algorithmic ideas are going to start having a bigger advantage over the next few years, as all the juice has been wrung out of the last set of ideas. This is a truly special one, and one that I'll remember for a very long time. I think we could probably get 30-40% more efficiency out of our national grids. Enjoy the episode, and I so appreciate the time we had with a very special human being. I sometimes quantify the coming of AGI as 10 times the Industrial Revolution at 10 times the speed. I'm thrilled to welcome Demis Hassabis of DeepMind. Ready to go?
[1:11] Demis, I'm so excited to be doing this. Thank you so much for joining me today. Great to be here. Now, there are many places we could have started, but I was actually watching the documentary you did, which was fantastic, and I wanted to start on AGI. Definitions vary widely, and you've been very thoughtful about what it means to you. So can you explain how you think about it today, so we have that as a kind of ground center? Yeah, well, we've always been very consistent in how we define AGI: basically a system that exhibits all the cognitive capabilities the human mind has. And that's important because the brain is the only existence proof we have, that we know of, maybe in the universe, that genuine intelligence is possible. So that, for me, is the bar for what AGI should be. It's the worst question, but how close are we? Everyone says different things, and it's very difficult when you have very prominent figures saying it could be as early as 2026, 2027. Yeah, I mean, look, I've got a probability distribution around the timings, but I would say there's a very good chance of it being within the next five years. So it's not long at all. Is that closer than you thought? Has that changed over time? Not really. It's funny: my co-founder Shane Legg, who's chief scientist here, used to write blog posts predicting when AGI would happen, back when we started DeepMind in 2010. And bear in mind, in 2010 when we started, almost nobody was working in AI and everyone thought AI basically didn't work, that it was a dead end. But those posts are still there on the internet for people to check, and we used to do this extrapolation of compute and algorithmic progress. Basically, we predicted it would take around 20 years from when we started out, and I think we're pretty much on track.
What are the biggest bottlenecks when you look today? In the documentary you said you just never have enough compute. I think compute is the big one, not just for the obvious reason of scaling up your ideas and your systems, the scaling laws as they're called, keeping on building bigger and bigger architectures with more and more parameters. As you do that, you get more intelligent systems. But the other thing you need a lot of compute for is doing experiments. The computers, the cloud, are our workbench, basically. So if you have a new algorithmic idea and you want to test it, you've got to test it at a reasonable scale, otherwise it won't hold when you actually put it into the main system. So you need quite a lot of compute if you have a lot of researchers with lots of new ideas. You mentioned scaling laws. A lot of people suggest that we're hitting the limits of scaling and starting to see a plateauing effect. Do you think that's true? No, I don't think so. I think it's a bit more nuanced than that. Of course, when the leading companies all started building these large language models, you were getting enormous jumps with each generation of new system, maybe almost doubling in performance. At some point that had to slow down, so it's not continuing to be exponential. But that doesn't mean there isn't still great return from scaling the existing systems up further. So I would say the returns are still very substantial, although they're a bit less than they were, obviously, at the start of all of this scaling.
[4:41] Where are we behind where you thought we would be? I think actually in most areas we are ahead of where I thought we would be, if you think about things like the video models, or even now with our newest systems like Genie, which are interactive world models. I think that's kind of incredible if you step back and think about it. If you'd shown me that five or ten years ago, I would have been pretty amazed. So I think in most domains we're ahead of where the field thought. There are still some big things missing, though, like continual learning. These systems don't learn after you've finished training them and put them out into the world. They're not very good at learning further things, and I think some critical capabilities are missing. Why is that? I'm sorry to ask blunt and basic questions, but why do we not have continual learning today? Well, people haven't quite figured it out yet, and all the leading labs are working on this: how to integrate new learning into existing systems that you've spent months training. Of course, the brain does this very elegantly, probably through things like sleep and reinforcement learning. You get what's called consolidation in the brain, where your memories from the day are replayed and some of that information is elegantly incorporated into your existing knowledge base. And I've thought for a while that perhaps we need something like that to incorporate new information alongside the existing information base. You mentioned video models, and media and image more broadly. It seems that DeepMind has progressed very quickly and caught up with, or overtaken, other providers. I tweeted, and I think you liked it, what I used and how it's changed over time, and DeepMind is now my number one for research for new shows.
It wasn't that way before. What has led to the acceleration of DeepMind in a way that maybe wasn't there two or three years ago? Yeah, well, we made some organizational changes. I think we've always had the deepest and broadest research bench at Google and at DeepMind. If you look at the last decade or more, 15 years, I would say about 90% of the breakthroughs that underpin the modern AI industry were done either by Google Brain or Google Research or DeepMind, so one of our groups. If you think of AlphaGo and reinforcement learning, and of course transformers, these are all the key breakthroughs. So I would back us to make those breakthroughs in the future, if there are any missing ones. And I think we've basically brought together all the talent from around the company, pushing in one direction. Then, as we talked about earlier with compute resources, it was also about combining all of our resources so we could build the biggest models, rather than having two or three versions around the company. So a lot of it was assembling all the ingredients we already had and then pushing with relentless focus and pace, acting almost like a startup, really, to get back to the frontier and be ahead in many areas. You say if anyone's going to make the breakthrough, it could and should be us. When you think about that, is continual learning the next breakthrough that you're most excited by? I think there are quite a few things that are missing. There's continual learning, and I think there's a lot of mileage in looking at different memory systems. At the moment we have these long context windows, which are a bit brute force: you just put everything in them. I think there are a lot of interesting architectures to be invented there.
And then there's stuff like long-term planning, hierarchical planning. These systems are not very good at planning over long time horizons, many years into the future, which we can do with our minds. So there are quite a lot of problems, I think, that are still left to overcome. Maybe one of the biggest is consistency. I sometimes call these systems jagged intelligences, because they're really amazing at certain things when you pose the question in a certain way, but if you pose the question in a slightly different way, they can still fail at quite elementary things. A general intelligence shouldn't be that jagged. When you rephrase things, or you set up agents to perform in certain ways and then they fail. They just fall over. Yeah, 100%. That's a disaster. Yeah, well, a general intelligence, if you think about how our minds work, shouldn't have those kinds of holes in it. We talked about a plateauing of scaling laws. Everyone talks about a commoditization of models in terms of capabilities. Do you think we see that, or do you think we see one or two continuously accelerate ahead of the others? Yeah, I feel like with the three or four leading labs now, of which we're one, the gap is starting to pull away, because a lot of these tools also help you build the next generation, things like coding tools and math tools. And it's getting harder and harder, I would say, to eke out the same gains from the same ideas. So I think those labs that have the capability to invent new algorithmic ideas are going to start having a bigger advantage over the next few years, as all the juice has been wrung out of the last set of ideas.
I'm intrigued. You were very open with a lot of your research for years, and we see many very good quality open models. How do you think about the future of open? I have many portfolio companies that have used frontier models to set a benchmark, and then used open models to get as close as possible with more cost effectiveness. What does that future look like? Yeah, I think it's probably similar to what we're seeing today. We're big supporters of open science and open models, and we've done many, many things, obviously, from the original transformers to AlphaFold. These are all things we've given out into the world to help the research community, and we plan to continue to do that, especially in applied domains, scientific domains, applying AI to science, which is obviously my passion. But I think increasingly what you're going to see is the open source models sitting one step back from the absolute frontier. Usually it takes about six months for the open source community to re-implement and figure out what those ideas are. But we are also pushing hard on a suite of open source models called Gemma, which we're determined to make best in class for their sizes. So specifically for small developers, or academics, or the beginnings of a startup, I think they're perfect, and also for edge computing. So we're very interested in open source models for certain types of applications. How do you think about a world post LLMs? Different people have different views; you have Yann LeCun with very different views.
For me, I kind of disagree with Yann on a few things. I think there's a 50-50 chance there are some things missing that we still need to make breakthroughs in. Perhaps that's world models, these kinds of approaches. But my betting, pretty strongly, is this: we've seen how successful these foundation models have been. They can do incredibly impressive things. I don't think that's going to go away; we're still seeing gains and returns from the scaling laws. So I think the only question, really, when you think about a future AGI system, is whether an LLM foundation model is going to be the key component only, or the total system. It's a question of whether anything else is needed, not whether it's going to get replaced. I think it's going to get built on top of these foundation models. When we think about that future, five years out, as you said, potentially AGI, what does that world look like? Many people have different concerns.
[12:39] Well, I think on the positive side, and the thing I've obviously spent my whole career and life building towards with AGI, is that it will be the ultimate tool for science and medicine. In terms of advancing scientific discovery and finding cures to diseases, I think we need that kind of technology, and so I'm hoping that in five-plus years' time we'll be entering a new golden age of scientific discovery. My mother's got multiple sclerosis, so it's the thing that I'm always most excited about. The thing I worry about is actually drug discovery itself: the process of getting a drug through all the trials, knowing that it takes a decade before my mother will actually get any benefit from it. How do we solve that? I think we'll get to that point soon. First of all, after we did the AlphaFold project on protein folding, we spun out a company called Isomorphic Labs, which is doing extremely well. The idea there is that we're focusing on solving the rest of the drug discovery process, which is a lot of chemistry: designing the compounds, checking they're not toxic, and all the different properties you need for drugs to be safe. I think we'll have that whole drug design engine ready in the next five to ten years. Then you're right, the next problem is that clinical trials still take many, many years. But I think AI can help there too, in terms of maybe simulating parts of human metabolism, and also stratifying patients to make sure that certain patients get exactly the right type of drug for their genomic makeup. So I think AI can help there, but I think the real revolution will come when a few, maybe a dozen or so, AI drugs get through the whole process.
Then the government and the regulatory bodies will see that, and they'll have enough data to back-test the predictions of those models. And then maybe, perhaps ten further years out, we can really just trust the predictions the models are making and actually skip some steps: perhaps animal testing is not needed anymore, maybe we can go up the dosage ladder quicker, because you can rely on these models. So I think we've got to do it in two steps: solve the drug design problem first, and then look at the regulatory length of time it takes. Speaking of regulation, AI safety is a big topic and a big concern. Again, I watched the documentary last night over dinner, which was a great watch, and I think it was Stephen Hawking who said we must get it right because we might not get another chance. Do you think that's right? Yeah, I do think that's right. I think those are the stakes we have to deal with, and there are two things I worry about. One is the misuse of these systems by bad actors: these are dual-purpose technologies. They can be used for incredible good in science and health, as we just discussed, but they can also be repurposed for harmful ends by a bad actor. So that's one issue. The second issue is a technical one: making sure these systems, as they get more powerful, not today's systems but maybe in a year or two's time when they become more agentic, more autonomous, as we get towards AGI, can be kept on the guardrails that we want. And I think regulation, the right kind of regulation, could help here, in terms of making sure there are at least minimum standards from all of the leading providers. But it ideally needs to be a kind of international standard. What is the right kind of regulation?
And again, I'm kind of quoting you back at yourself here: you've said you think we need more global coordination, which worries me, because we're getting worse at it. Yes, which I think would be an unwavering truth. Yes, for sure. It's sort of crazy, the timing that we're in, right? We have maybe the most consequential technology the world's ever seen at the same time as a very fragmented international system. It's not ideal, but I think we're going to have to try and do the best we can, to at least come up with a set of minimum standards, some benchmarks that test for undesirable properties, for example deception. Nobody should be building systems that are capable of deception, because then they could be getting around other safeguards. And then I imagine, if things go well, some kind of certification process: almost like a kitemark of quality, saying that this model has certain safeguards and certain guarantees.
[17:19] And so therefore consumers and companies can safely build on top of it. I think that is how it should go, ideally. But it does have to be international, because of course these systems are cross-border, cross-territory. Who runs that ultimate verification system? You obviously started with Theme Park a long time ago. Yes, brilliant. Don't put the burger stands too close to the roller coaster. But, you know, I obviously run a media company, and across the media platforms I go through, I don't know what's real or fake. I'm always having to ask what's real or fake. Who is that arbiter of verification? Yeah, well, ultimately it's got to be government, I think. But the kind of technical bodies that would be able to do the technical work would be the AI safety institutes. There's a very good one in the UK that was set up under Prime Minister Sunak, and I think it's doing great work, and there's one in the US. Maybe some of the leading countries that have the best research should also have an equivalent body staffed with high-quality researchers, who can actually evaluate and audit these kinds of systems against certain benchmarks and independently check whether they are meeting the right standards. If I could give you a magic wand that, sadly, was only applicable to AI safety, what would be the implementation, the idea, the program that you would put in place with it? Yeah, I think we need some kind of international body, maybe similar to the atomic energy agency, something like that, that perhaps the AI safety institutes feed into. And the research community has to be involved too, in working out the right set of benchmarks: what types of traits, what types of capabilities to check for.
[19:10] Maybe there are other safeguards too. For example, it wouldn't be desirable to have AI systems output tokens that are not human-readable, in some kind of machine language that we couldn't understand. That would introduce a new vulnerability. So there are quite a few things like that which I think most of the leading labs would agree are probably best not to do.
[19:39] And then these institutions would test against those things. I think that would give the public confidence, and academia could be involved, as well as civil society, so that these systems, which are going to get incredibly powerful, have been independently checked and audited. And that's your magic wand done now. That was the one. Yeah, maybe I used it on the wrong thing, but yeah. Exactly. Exactly.
[19:59] Okay, we're going to do a quick-fire round. Yes. You met Elon for the first time.
[29:23] How was that? Oh, yeah, it was amazing. It was at a Founders Fund event, because both SpaceX and DeepMind were part of the same portfolio, a kind of amazing portfolio that Peter Thiel had at Founders Fund. I think we were both invited; it was my first portfolio conference. It must have been back in 2011 or 2012, very early days. So we were the small little up-and-coming thing, and I had a small speaking slot, while Elon was the big thing in that portfolio, so he had the keynote. But then we met afterwards. Elon says we were passing each other near the bathroom or something. We said hi and we hit it off immediately, as people who were almost too ambitious in their thinking, perhaps, and who both love sci-fi. And I really wanted to visit his rocket factory, so I was trying to angle for an invite to SpaceX in LA, and he invited me at the end of that meeting. So our second meeting was in the SpaceX factory. I love it. Now your speaking slot's as big as his. I don't know about that. The healthcare revolution: which disease eradication are you most excited about? Again, for me it's specifically multiple sclerosis. Well, look, I want to literally cure cancer. I know people say that's the cliche, but actually what we're building at Isomorphic is general-purpose. We're trying to build a drug design platform that will be applicable to any therapeutic area. So ideally it will help with everything from neurodegeneration to cardiovascular disease, immunology, and cancer. Those are the ones we're focusing on first, but eventually it should be applicable to every disease area. What are you thinking about that you're not reading about or seeing anyone talk about?
I think a lot of people are worrying about the economic questions around AGI that we talked about earlier, but I worry a lot about the philosophical questions around it.
[31:21] When it comes, let's assume we get the technical part right, and let's assume we get the economics part right, and both of those are hard. Then there's a philosophical question of what is meaning, what is purpose. We'll find out, won't we, what consciousness is. What does it mean to be human? I think that's what's coming down the road, and I think we need some great new philosophers to help us navigate that.



