
Demis Hassabis: Agents, AGI & The Next Big Scientific Breakthrough

Y Combinator

40m 42s · 6,401 words · ~33 min read
Auto-Generated

[0:00] Continual learning, long-term reasoning, some aspects of memory: these are still unsolved. I think all of these are going to be required for AGI. Depending on your AGI timeline (mine is around 2030 or so), if you start a deep-tech journey today, you have to consider AGI appearing in the middle of that journey. That's not necessarily bad, but you have to take it into account. You need an active system that can actively solve problems for you to get to AGI. Agents are that path, and I think we're just getting going.

[0:40] Demis Hassabis has had one of the most unusual careers in tech. He was a chess prodigy as a kid, then designed his first hit video game, Theme Park, at 17. He then went back to school, got a PhD in cognitive neuroscience, and published foundational work on how memory and imagination work in the brain. In 2010 he co-founded DeepMind with one mission: solve intelligence. And I think they've done it. Since then, his lab has gone on to do things most people thought were decades away. AlphaGo beat a world champion at Go. AlphaFold cracked protein structure prediction, a 50-year grand challenge in biology, and they gave it away for free to every scientist on Earth. That work won him the Nobel Prize in Chemistry last year. Today Demis leads Google DeepMind, where he's building Gemini and pushing toward the same goal he set when he was a teenager: artificial general intelligence. Please welcome Demis Hassabis.

[1:53] So, you've been thinking about AGI longer than almost anyone. When you look at the current paradigm, large-scale pre-training, RLHF, chain of thought, how much of the final architecture for AGI do you think we already have, and what's fundamentally missing right now? Well, first of all, thanks, Garry, for that great introduction, and it's great to be here. Thanks for welcoming me. It's an amazing space, actually; I'll have to come back here often. Very inspiring that you all get to work in this space. On the question: I'm pretty sure the components you just mentioned will be part of the final architecture for AGI. They've come such a long way now, and we've proven out so many things about what they can do, that I can't see a world in which we'll realize in a couple of years that this was a dead end. That doesn't make sense to me. But there might still be one or two things missing on top of what we already know works. So, continual learning, long-term reasoning, some aspects of memory: these are still unsolved, as is how to get the systems to be more consistent across the board. I think all of these are going to be required for AGI. Now, it might be that the existing techniques can just scale up to that with some incremental innovation, but it could be that there are still one or two big ideas left that need to be cracked. I don't think it's more than one or two, if they are out there, and my betting is about 50-50 on whether that's the case. So of course, at Google DeepMind, we work on both of those things. The wildest thing to me, working with a bunch of these systems, is to what degree it's the same weights over and over.
This idea of continual learning is so interesting, because right now we're sort of cobbling it together with duct tape, these dream cycles at night and things like that. The dream cycles are pretty cool, and we used to think about this in terms of consolidation and episodic memory. Actually, that's what I studied for my PhD: how the hippocampus works and integrates new knowledge gracefully into the existing knowledge base. The brain does that amazingly well. It does it during sleep, especially things like REM sleep, replaying back episodes that are important so that you can learn from them. In fact, our very first Atari program, DQN, was able to master Atari games partly by doing experience replay. We borrowed that from neuroscience and replayed successful trajectories many times. That's way back in 2013 now, in the dark ages of AI, and it was a really important thing. And I agree with you, we're kind of using duct tape right now: just shove it all in the context window. But that seems a bit unsatisfactory, right? And actually, we're working on machines, not biological brains, so potentially you could have a context window or memory of millions or tens of millions of tokens, and it can be perfect.
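The experience replay mentioned above can be sketched in a few lines: a fixed-size buffer of past transitions, sampled uniformly at random during training. This is a minimal illustration of the idea, not DeepMind's actual DQN implementation:

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size store of past transitions, sampled uniformly for training."""

    def __init__(self, capacity):
        # deque with maxlen silently evicts the oldest transition when full
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform sampling breaks the temporal correlation between
        # consecutive frames, which stabilizes gradient updates
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)
```

In training, the agent pushes each transition as it plays and periodically draws a random minibatch to update its value network, replaying important episodes many times, much like the sleep consolidation analogy in the conversation.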

[5:12] There's still a cost to looking it up and finding the right thing that's actually relevant for the specific decision you've got to make right now. That cost is non-trivial, even if you can potentially store it all. I think there's actually a lot of room for innovation in areas like memory. Yeah, I mean, the wild thing is that a million-token context is plenty big, honestly. You can do stuff. Well, it's plenty big for most things it should be used for. If you think of the context window as roughly equivalent to working memory, humans can hold a few digits, an average of seven. We've got million or even 10-million-token context windows, but the problem is that we're trying to store everything in them: things that aren't important, things that are wrong. It's pretty brute force currently, and that doesn't seem right. And then the problem is, if you're now going to try to process live video and you just naively record all the tokens, then actually a million tokens isn't that much. It's only about 20 minutes. So you need more if you want something that's going to understand what's going on in your life over maybe a month or two. DeepMind has historically leaned into reinforcement learning and search: AlphaGo, AlphaZero, and MuZero. How much of that philosophy is actually embedded in how you're building Gemini today? Is RL still underrated? Yeah, I think potentially it is. It goes in ebbs and waves. We've worked on agents since the beginning of DeepMind; in fact, that's what we said we were working on.
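The video arithmetic above is easy to sanity-check. If a million tokens covers roughly 20 minutes of naively tokenized video, the implied rate is about 833 tokens per second. Note that this rate is back-derived from the figure quoted in the conversation, not an official tokenization number:

```python
def video_minutes_per_context(context_tokens: int, tokens_per_second: float) -> float:
    """Minutes of naively tokenized video that fit in a given context window."""
    return context_tokens / tokens_per_second / 60

# Rate implied by "a million tokens is only about 20 minutes"
# (assumption back-derived from the transcript, not a published figure)
implied_rate = 1_000_000 / (20 * 60)  # ~833 tokens/sec

print(video_minutes_per_context(1_000_000, implied_rate))   # 20.0
print(video_minutes_per_context(10_000_000, implied_rate))  # 200.0
```

At that rate even a 10-million-token window holds only a few hours of raw video, which is why understanding "a month or two" of someone's life calls for compression or smarter memory rather than naive token storage.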
All of the Atari work and AlphaGo, most specifically, are agent systems, and what we meant by that is systems that are able to accomplish goals on their own, make active decisions, and make plans. We were doing it in the domain of games to make it tractable, and then in increasingly complex games, things like StarCraft with AlphaStar after AlphaGo. So we basically did all the games that are out there. Then, of course, the question is whether you can generalize those models to be world models or models of language, not just models of simple games, or even complex games. That's what the last few years have been about. But really, you can think of a lot of the things we're doing today, all the leading models with thinking modes and chain-of-thought reasoning, as aspects of what was pioneered with AlphaGo coming back now. And I actually think there's a lot of work we did back then that is relevant today, and we're relooking at some of those old ideas at scale, in a more general way. That includes things like Monte Carlo tree search and other ways of augmenting the RL on top of the reinforcement learning we already do today. I think a lot of those ideas from both AlphaGo and AlphaZero are really relevant to where we are with today's foundation models, and a lot of that is behind the advances we're going to see in the next few years. One question I would have: obviously, today you need bigger and bigger models to be smarter and smarter, but we're also seeing distillation working, and smaller models can be quite a bit faster. You have incredible flash models, and you're finding that they're 95% as good as the frontier at like a tenth of the price. Is that right? I think that's one of our core strengths.
I mean, you have to build the biggest models to have the frontier capabilities. But one of our biggest strengths has been distilling and packing that power into smaller and smaller models very quickly. We invented the kind of distillation process, people like Jeff and Oriol and others, and we're still world experts in that. And we also have a huge need to do it, because we've got to serve probably the biggest AI surfaces there are. Obviously there's Search with AI Overviews and AI Mode, then there's the Gemini app, and now increasingly every single product at Google, Maps and YouTube and so on, has some aspect of Gemini or Gemini-related technology in it. That's billions of users across more than a dozen billion-user products, and they have to be served extremely fast, efficiently, and cheaply, with low latency. So that gives us a really important incentive to make these Flash and even smaller Flash-Lite models extremely efficient, and hopefully that ends up being really useful for many of the workloads all of you use them for. I'm curious how much smarter these smaller models can actually be. Are there limits to the distillation process? Could a 50B or 400B model be as smart as a frontier model is today? Yeah, I don't think we've got to, or at least none of us knows yet whether we've got to, any kind of information limit. Maybe at some point that will be the case, where there's just an information density we can't get beyond. But for now, the assumption we make is that half a year to a year after one of our leading pro or frontier models goes out, you'll have that capability in the really tiny, almost edge-sized models.
You also see some of that goodness in our Gemma models, which hopefully you all enjoy, including the Gemma 4 models, which I think pack really amazing power for their sizes. Again, that uses a lot of these distillation techniques and the ideas of how to make things really efficient in very small models. So I don't really see any theoretical limit yet; I think we're still pretty far off of that. That's amazing. That is really good, because one of the weirder things we're seeing right now is that engineers can do like 500 to 1,000 times the amount of work they were doing six months ago. The people in this room are doing, as Steve Yegge talks about, about 1,000x the work that a Google engineer from the 2000s was doing.
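The distillation idea discussed above, a small student model trained to match a large teacher's softened output distribution, can be sketched with a temperature-scaled KL divergence. This is a toy illustration of the classic technique; Gemini's actual distillation recipe is not public:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities; higher temperature flattens them."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence from the softened teacher distribution to the student's.

    The temperature exposes the teacher's 'dark knowledge': how it ranks
    the near-miss classes, not just which class it picks.
    """
    p = softmax(teacher_logits, temperature)  # soft targets from the big model
    q = softmax(student_logits, temperature)  # small model's predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

A student that exactly matches the teacher's logits gets zero loss; training pushes the small model's whole output distribution, not just its top answer, toward the teacher's, which is part of why distilled flash-sized models retain so much of the frontier model's behavior.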

[12:04] I think it's very exciting. The small models have many uses. One is obviously cost, but the speed also matters: if you think about coding or other things, you can iterate a lot faster. Especially if you're collaborating with the system, there's a lot of need for fast systems that maybe are not quite frontier, like you said, 90% or 95%. But that's plenty good enough, and you actually gain back more than the 10% in iteration speed. And then the other big thing is running these things on the edge, again for efficiency reasons, but also for privacy and security. Think about different devices you might run these systems on that process very personal information. You can also think about robotics, robots in your house. I think you're going to want very efficient, very powerful local models, maybe orchestrated with some bigger frontier models in the cloud, but you only delegate to those in certain circumstances. Perhaps you process all of the audiovisual feed locally, and it stays local. I could imagine that being a very good end state. I'm curious how much of it will be autonomous. Actually, I don't think we'd see autonomous first. We would probably see people in this room operating at a thousand x, and then... That's what you should see first, and then many of you, there'll be games companies or other types of companies that have built some kind of best-selling app or best-selling game using these tools. That's what you should see first, and then more of that will get automated. I mean, some of it is that there's a human in there, and then the human doesn't want to say that the agents did it.
I think part of it, though, is this thing we want to discuss about creativity. What I often say about that is: look at the things we've done, like AlphaGo. Very famously, you'll all know about move 37 in game two, and for me, I was waiting for a moment like that to start the science projects, like AlphaFold. We started AlphaFold the day we got back from Seoul, which is 10 years ago now, so I'm going to Korea after this to celebrate the 10-year anniversary of AlphaGo. But it's not enough to come up with move 37. That's pretty cool, very useful, but can it invent Go? I want a system that can invent Go if you give it a high-level description: a game you can learn the rules of in five minutes but that takes many lifetimes to master, that is beautiful aesthetically, and that you can play in a few hours in an afternoon. Maybe that would be the high-level description I would give, and then the thing I'd want back is Go, right? Clearly, today's systems can't do that. So the question is why, and I think there's something still missing there. Well, someone in this room might make it. Then the answer would be that there's nothing missing; it was just the way we were using the systems. And that might actually be the answer. It might be that today's systems are capable of that with a brilliant enough creative person using them and providing that impetus, the soul of the project, someone fluent enough with the tools to almost be at one with them. I could imagine that happening if you experimented with the tools all day and all night, as probably many of you are doing, and combined that with proper deep creativity; something more incredible could be done. Switching gears to open source, or rather, open weights.
With the recent release of Gemma, you're making highly capable, open, accessible models that can actually run locally. What do you think that means? Will AI be something in the hands of users instead of primarily in the cloud, and does that change who gets to build with these models? We're huge proponents in general of open source and open science. You mentioned AlphaFold at the beginning; we put that all out there for free. And all of our science work, even today, we publish in the big journals. We wanted to create world-leading models for their sizes, and that's hopefully what we've done with Gemma. We're very committed to that path, and hopefully you all experiment, build, and enjoy using Gemma. I think it's been something like 40 million downloads now, in just two and a half weeks, so we're really excited about that. I also think it's important for there to be Western stacks in open source. Obviously a lot of the Chinese models are excellent, and they're currently well ahead in open source, and we think Gemma is very competitive for its sizes in all those respects. For us, there is a question of resources, talent, and compute. Nobody has enough spare compute to make two frontier models at maximum size with different attributes, so that's pretty difficult. But for now, what we've decided is that our edge models, the things we want to use for Android and glasses and robotics, are best as open models, because they're vulnerable anyway once you put them out on those surfaces, so they might as well be fully open. So we've made a decision to unify that at what we call the nano size level.
That actually works for us strategically as well, and we hope as many people as possible build on it. Of course, we'll be building on it too. Earlier, before we came on, I got to show you a demo of my version of Samantha from Her, which was harrowing for me, trying to demo something to you. And it worked, which is amazing. Yeah, very good. I mean, Gemini was built multimodal, and I've spent a lot of time with a bunch of the models, and the depth of the context and the tool use with speech direct to the model: it's bar none the best one, actually. Yeah, I think that's still a slightly underappreciated aspect of the Gemini series: we made it multimodal from the start. That made it a little bit more difficult to begin with than just focusing on text, for example, but we believed we would gain from it in the long run. And I think we're seeing that now for things like world-model building, so stuff like Genie that we build on top of Gemini. I think it's going to be really important for things like robotics; that's why Gemini Robotics, which many of you have probably played around with, is built on multimodal foundation models, and we think we have a competitive advantage with Gemini being so strong at multimodal. We're using it increasingly in things like Waymo, but also, if you imagine devices and assistants, a digital assistant that comes with you into the real world, maybe on your phone or glasses or some other device, it needs to understand the physical world around you, intuitive physics, and the physical context you're in. That's what our systems are extremely good at, and I think that's why you've enjoyed using it in your setup.
We're planning to continue on that, and I think we have far and away the strongest models on those types of problems.

[21:18] So the cost of inference is dropping fast. What becomes possible when inference is essentially free, and how does that change what your team is actually optimizing for? Yeah, I'm not sure inference will ever be essentially free. There's Jevons paradox and other things; I think all of us will just end up using whatever we can get our hands on. You could imagine millions of agents, swarms of agents, working together on things; that's one way to use the inference. Or you could imagine single agents, or smaller groups of agents, thinking in multiple directions and then assembling the results. We're experimenting with all of these, as probably many of you are. All of that will use up any inference that's available, I think. Maybe one day it can be almost zero cost. Certainly the energy cost will be essentially zero if we solve fusion or superconductors or optimal batteries or some set of those things, which I think we will do with materials science. But there'll still be the physical creation of the chips and other things; there'll still be some bottleneck, at least for the next few decades. And if that's the case, there'll still be rationing on the inference side, so you'll still have to use it efficiently. Yeah, well, luckily the smaller models are getting smarter and smarter, which is fantastic. We've got a lot of bio and biotech founders in the audience; I can see a few. AlphaFold 3 took us beyond proteins to a broad spectrum of biomolecules. How close are we to modeling full cellular systems, or is that still a fundamentally harder problem in a class of its own? Well, Isomorphic Labs, which we spun out from DeepMind after we did AlphaFold 2 and which is going amazingly well, is trying to build out beyond AlphaFold, which is just one piece of the drug discovery process.
As many of you know, we're trying to do the adjacent biochemistry and chemistry to design the right compounds with the right properties and so on. We'll have some big announcements very soon on that front; I think that's going really well. Eventually, you want a whole virtual cell. I've talked about this in many of my science talks: a full working simulation of a cell that you can perturb, where the outputs would be close enough to experimental results to be useful. You could skip a lot of the search steps and generate lots of synthetic data to train other models that would then predict things about real cells. I think we're probably about 10 years away from something like a full virtual cell. On the DeepMind science side, we're starting with a virtual cell nucleus, because it's relatively self-contained. The trick with all of these things is whether you can pick a slice of the complexity. Eventually we want to model a human body, but can you model it down to the right level of detail, and what slice can you take out of it that will be self-contained enough? You approximate the inputs and outputs into that self-contained system and then focus on the system itself. A nucleus is quite interesting from that perspective. The other issue is that there's just not enough data yet. So you need data, and I talk to various top scientists who work on electron microscopes and other imaging techniques. If we could image a live cell without killing the cell, that would obviously be game-changing, because then you could convert it into a vision problem, which we would know how to solve.
But at the moment, at least, I'm not aware of any techniques that can give you nanometer resolution without destroying a live, dynamic cell, so that you can see all the interactions. You can take static images at that resolution, really detailed now, and that's quite exciting, but it's not enough to turn this into just a complex vision problem. So that's one way it could be solved: a hardware-driven, data-driven solution. Or it could be that we build better learned simulators of these dynamical systems; that's the more modeling-oriented way of solving it. You've been looking at all kinds of science, not just bio: materials science, drug discovery, climate modeling, mathematics. If you had to rank which scientific domain will transform the most dramatically in the next five years, what's on your list? Well, they're all so exciting, and that's why this has been my main passion and the reason I've worked on AI my whole career, for 30-plus years now: to use AI as the ultimate tool. I always thought AI would be the ultimate tool for science, to advance scientific understanding and scientific discovery, things like medicine and our understanding of the universe around us. So, the way we used to articulate our original mission statement, which is still the way we think about it, had two steps. Step one was solve intelligence, i.e., build AGI, and step two was use it to solve everything else. We had to change that a bit over time, because people were like, do you really mean solve everything else? And we did mean that, and I think people are understanding what that means today. But specifically, I meant solving what I call root-node problems in science.
These are areas of science that would unlock whole new branches or avenues of discovery, and AlphaFold is the prototypical example of what we want to do. Over 3 million researchers around the world, pretty much every biology researcher in the world, use AlphaFold now. And I was told by some of my pharma executive friends that almost every drug discovered from now on will have used AlphaFold at some point in the drug discovery process.

[29:40] So that's something we're very proud of, and it's the sort of impact we hope to have with AI. But I do think it's just the beginning. I don't really see any area of science or engineering that this won't be able to help with. And in the areas you mentioned, I think we're almost at an AlphaFold 1 moment: we've got very promising results, but we haven't quite solved the grand challenge in those domains yet. But I think we're going to have a lot to talk about in the next couple of years in all those areas, from materials, which I think is very exciting, all the way to mathematics. In science, I mean, it feels Promethean. It's like, here is this capability, and you know. I think so. Of course, along with that, as in the parable of Prometheus, we have to be careful with how we use it and what we use it for, and also with the misuse that can happen with those same tools. A lot of people in this room are trying to build companies applying AI to science. For them, what's the difference between a startup that actually advances the frontier, in your view, versus one that's just wrapping an API around a foundation model and calling it AI for science? Well, look, here's one thing I would recommend, thinking about what I would do today if I were sitting in your place in Y Combinator, looking at things. One thing you have to do is obviously intercept where the AI tech is going, so that's one hard part of it. But I do think there's huge scope for combining where AI is going with some other deep-technology area. That sweet spot, whether it's materials or medicine or other really hard areas of science, is where interdisciplinary teams can win, especially if it involves the world of atoms as well; there's not going to be a shortcut to that.
At least in the foreseeable future, those are areas that are pretty safe from just getting swarmed by whatever the next update to the foundation models is. So if you're looking for things like that, that's one of the more defensible areas, I would say. And I've always loved deep tech, so I'm biased towards deep-tech things. I think nothing that's really long-lasting and worthwhile is easy, so I've always been drawn to deep technologies. Obviously, AI was like that back in 2010 when we started out. "We know it doesn't work" is what I was told by investors, and even in academia it was considered a very niche subject that we sort of tried in the 90s and know doesn't work. But if you have belief and conviction in your idea, why it's different this time, or what special combination comes from your background, ideally being an expert in both areas, both the machine learning and the area you're applying it to, or you can create a founding team with that expertise, then I think there's huge impact to be made there and huge value to be built. That's a really important message. It's easy to forget: once you've done it, you've done it, but before you've done it, people are arrayed against you. Oh, sure. No one believes in it, which is why you've also got to work on things you're genuinely passionate about. For me, I would have worked on AI no matter what happened. I decided from a very young age that it was the most consequential thing I could think of. It's turned out that way, but it might not have, and maybe we would have been 50 years too early. And it was also the most interesting thing I could think of working on.
So I would still be working on AI today even if we were still in a little garage somewhere and it still wasn't quite working. Maybe I'd have been back in academia or something, but I would have found some way of continuing to work on it. So, AlphaFold was an example of a spike that you pursued, and it worked. What makes a scientific domain ripe for an AlphaFold-style breakthrough? Is there a pattern, a certain objective function? I should write this up at some point when I have five minutes spare, but the lesson I've learned from all the Alpha projects we've done, specifically AlphaGo and AlphaFold, is that the techniques we have, and the problems I like to look for, are great when the situation can be described as a massive combinatorial search space, the more massive the better in some ways, so that no brute-force or special-case algorithm will solve it. That's true of Go moves and of the different configurations of proteins, far more than the atoms in the universe in both cases. Then you need a clear objective function, whether that's minimizing the free energy of the protein or winning the game of Go. You need to specify the objective clearly so you can hill climb. And then you need enough data, and/or a simulator that can generate lots of synthetic data from the right distribution. If those things are true, then with today's methods you can go a long way toward finding the kind of needle in a haystack you're looking for. And I think of drug discovery in the same way, by the way. There is a compound out there that would solve this disease, if one could only find it, and that wouldn't have any side effects and so on.
As long as the laws of physics allow it, the only question is how you find it in an efficient, tractable way. I think we showed for the first time with AlphaGo that these systems could find those kinds of needles in a haystack, in that case the perfect Go move. I guess to get a little meta: we're talking about humans using these methods to create AlphaFold, but then there's a meta level, which is humans using AI to explore the space of possible hypotheses. How close are we to AI systems that can do genuine scientific reasoning, not just pattern matching on data? I think we're close. We're working on general systems like that; we have this system called co-scientist, and we have other algorithms, like AlphaFold, that can go a little bit beyond what the basic Gemini will do. And obviously all the frontier labs are experimenting in this way. We all tinker with the same things, some math problems that are a little bit harder than the IMO and so on, but I haven't yet seen anything that is a true, genuine, massive discovery. That's my personal opinion. I think it's coming, and I think it may be related to the thing we discussed earlier about creativity, actually going beyond the bounds of what's known. Clearly, that's not just pattern matching at that point, because there is no pattern to match. And it's a bit more than extrapolation; it's some kind of analogical reasoning. I don't think these systems have that, or at least we're not using them in the right way to get it. So the way I often put it in science is: can it come up with a hypothesis that's really interesting, not just solve one? And when I say "just," we're not talking about just solving the Riemann Hypothesis or one of the Millennium Prize problems; that would obviously be amazing, and maybe we're a couple of years out from doing that.
I'd like it to solve P=NP; that's my favorite one. But even harder than that would be to come up with a new set of Millennium Prize problems that top mathematicians would regard as deep and meaningful and worthy of a lifetime of study and effort to solve. I think that's another level harder, and I still don't think we know how to do that. I don't think it's magical, though; I do think these systems will eventually be able to do it. Maybe we're missing one or two things. The way we would test that is what I sometimes call my Einstein test: can you train a system with a knowledge cutoff of 1901, and will it then come up with what Einstein did in 1905, his annus mirabilis, including special relativity? Can it do that? I think we could run that test; maybe we should just run it and keep seeing whether that's possible. And once it is, then I think we're on the verge of these systems being able to invent something new, truly novel. So, last question. For the people who are deeply technical in this room, who want to work on something even close to the scale of what you have created with one of the largest AI efforts in the world, you've been a pioneer for all these years. So for that, I think everyone in this room thanks you and the folks at DeepMind very deeply, from the bottom of our hearts. Thank you.
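The recipe Hassabis describes for AlphaFold-style problems, a massive combinatorial search space plus a clear objective you can hill climb plus data or a simulator, can be illustrated with a deliberately tiny greedy hill climber. The bitstring objective here is just a stand-in for a real score like free energy; real problems need far more than greedy search, which is exactly why learned models matter:

```python
def hill_climb(initial, neighbors, objective, max_steps=100):
    """Greedy hill climbing: move to the best-scoring neighbor until stuck."""
    current, best = initial, objective(initial)
    for _ in range(max_steps):
        candidate = max(neighbors(current), key=objective)
        score = objective(candidate)
        if score <= best:
            break  # local optimum: no neighbor improves the objective
        current, best = candidate, score
    return current, best

# Toy search space: bitstrings, with single-bit flips as the neighborhood
def flip_one(bits):
    return [bits[:i] + [1 - bits[i]] + bits[i + 1:] for i in range(len(bits))]

# Toy objective: count of 1s, a clear, hill-climbable score
solution, score = hill_climb([0] * 8, flip_one, sum)
print(solution, score)  # [1, 1, 1, 1, 1, 1, 1, 1] 8
```

The three ingredients map directly onto the sketch: `flip_one` defines the combinatorial space, `sum` is the explicit objective, and the loop is the (here trivial) search; swap in a learned evaluation function and a smarter search, and you are in AlphaGo/AlphaFold territory.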
