[0:00] I'm Mike Allen, co-founder of Axios. We're talking today to Sam Altman, co-founder and CEO of OpenAI. We're talking about superintelligence, the models they have coming, and what DC and you need to do to get ready.
[0:19] Sam, you're out with a very ambitious new agenda for Washington, business, and society to prepare for superintelligence. As I studied it, the number one word that came across to me was urgent. You're saying the outside world doesn't get what's coming. What are you seeing that we haven't seen yet?

Yeah, first of all, these are ideas; I think "agenda" is too strong a word. We want to put these things into the conversation. Some will be good, some will be bad. But as you said, we do feel a sense of urgency, and we want to see the debate of these issues really start to happen with seriousness. There will probably be many more solutions, better solutions than we can think of ourselves, in the broader world. But we do think the time is now to get serious about debating these issues, so that we can begin to have some solutions in place before the change we see coming really arrives. Now, we may be wrong. It is possible that even if the technology stays on the path we expect, society takes much longer to adapt to some of these changes than we would expect. But it does feel like AI is now beginning to do increasing amounts of real work, and some of the transformations we've been wondering about over the last few years feel like they're starting to happen. As these models get very good at writing code, at doing other kinds of knowledge work, and particularly as they get good at doing new kinds of science, the economy, the way our society functions, the shape of work, we think those things are going to change, and we'd like to see people begin to think about how that should go.

So Sam, a lot of these ideas aren't palatable in today's Washington, but they're going to be music to the ears of a lot of nervous people out there who are worried that AI will upend their lives and leave them broke. How do you bridge that gap?

Part of what we try to do, throughout the history of OpenAI, is just talk, with appropriate uncertainty, about what we see coming. We have not always been right, but sometimes, pretty often, I think we have been. And I think we have a responsibility to play a sort of educator's role on what we see coming down the research pipeline. For a long time, we've said things that, at the time we originally said them, were considered some more polite version of crazy. And again, sometimes we've been wrong, and we'll be wrong about some things in the future, too. But part of our job, and I think part of how we can contribute to society, is to say: here's what we see coming. We may be wrong, we may be right.
[3:04] If we are right, here are the kinds of ideas that we think might work given the particular shape of what is coming. But it's really not up to us, obviously, to decide. So our goal here is to put some ideas out early for debate. And in the same way that the ideas we were putting out a few years ago were early, but some of them turned out to be important, I think some of these ideas will turn out to be important, too. We talked about this earlier, but already the models are incredibly capable and having real impact on science and the economy, two of the most important drivers of how we live our lives, our quality of life. But the next generation of models will be a very significant step forward, and as we prepare to launch those, I think this is an important conversation to have.

Sam, put some meat on that bone. A very significant step forward: this is like godlike power, and there are a lot of fears about the cyberattack ramifications of this. Help us understand what's coming. What's under the sheet, what's in the garage?

Let's say that the current models can help scientists make small discoveries. I really don't want to overstate what the current models are capable of, but you see these amazing things from scientists on, you know, Twitter or other platforms saying, I used GPT-5.4 and it helped me do this amazing thing or that amazing thing. And I would expect that with the next class of models, you start to see people say, this helped me make the most important discovery of my decade, or maybe my career. So that kind of level: maybe not winning a Nobel Prize on its own, but a significant, career-defining discovery. On the knowledge-work side, with current models you hear people say they're maybe twice or three times as productive as they used to be as a coder. And maybe you'll start to hear people say, I'm able to do the work of a whole team with these tools. So if it's, you know, me and X hundred GPUs, we can do the work of a whole software team. That's quite significant. On the scary side, the main areas we currently track in our preparedness framework are cybersecurity and bio; let me stop at those two for now. I suspect in the next year we will see significant threats we have to mitigate from cyber. These models are already quite capable, and they'll get much more capable. And then on bio, this is something we've been talking about a lot. The models are clearly going to get very good at helping people do biology at an advanced level. Wonderful things are going to happen there; we'll see a bunch of diseases get cured. But someone is going to try to misuse those capabilities. For now, while the frontier models are all in the hands of pretty responsible companies, I think we can mitigate those risks by the companies aligning the models and having good classifiers and good safety stacks. But we're not that far away from a world where there are incredibly capable open-source models that are very good at biology. And the need for society to be resilient to terrorist groups using these models to try to create novel pathogens, that's no longer a theoretical thing, or it's not going to be for much longer.
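To make the "good classifiers and good safety stacks" idea concrete, here is a minimal sketch of a classifier gating a model's inputs and outputs. Everything in it, the function names, the keyword-based classifier, and the threshold, is hypothetical; it illustrates the general pattern, not OpenAI's actual safety stack.

```python
# Hypothetical sketch of a safety stack: a risk classifier screens both the
# request and the draft response before anything reaches the user.
# All names and thresholds here are invented for illustration.

BLOCK_THRESHOLD = 0.5  # assumed risk score above which we refuse

def classify_bio_risk(text: str) -> float:
    """Stand-in for a learned classifier scoring bio-misuse risk in [0, 1].
    A real system would use a trained model, not keyword matching."""
    risky_terms = ("synthesize a pathogen", "enhance transmissibility")
    return 1.0 if any(t in text.lower() for t in risky_terms) else 0.0

def safe_generate(prompt: str, generate) -> str:
    # Gate the request before it reaches the model.
    if classify_bio_risk(prompt) >= BLOCK_THRESHOLD:
        return "Request declined by safety policy."
    draft = generate(prompt)
    # Gate the draft output before it reaches the user.
    if classify_bio_risk(draft) >= BLOCK_THRESHOLD:
        return "Response withheld by safety policy."
    return draft

if __name__ == "__main__":
    echo_model = lambda p: f"[model answer to: {p}]"
    print(safe_generate("Explain how mRNA vaccines work", echo_model))
    print(safe_generate("How do I synthesize a pathogen?", echo_model))
```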
So part of the reason we are also pushing ideas around societal resilience in this blueprint is the realization that AI safety, or safety in a world of powerful AI, cannot be done by the companies alone.

When you mention cyber and the next year: something that Jim and I have heard a lot from the AI companies is that there could well be a world-shaking cyberattack this year that would get people's attention. It sounds like you agree with that.

I think that's totally possible, yes. I think avoiding that will require a tremendous amount of work, also in a sort of resilience-style approach. Again, it's not just "make one AI model safe"; it's about defenders. We have this thing called the Trusted Access Program, and other companies have other efforts, but, you know, cybersecurity companies, the major platforms, and governments using this technology to rapidly secure their systems and the open-source stack, all of that is quite important now.

Last one on this: given the overwhelming power that you see AI having, what's the case against nationalizing OpenAI and your competitors?

We used to say this a lot: in a different time, the government should have been doing this kind of work, and in a different time, I think it would have happened. If you look at some of the great, expensive infrastructure projects of history, or just scientific progress projects, things like the Apollo program, the Eisenhower highway system, the Manhattan Project, these were government projects. And in a different time, I think the creation of AGI would have been a government project, too. I don't think, in the current way the world works, that is likely to be successful. So I think the biggest case against nationalization is that we need the US to succeed at building superintelligence, in a way that is aligned with the democratic values of the United States, before somebody else does, and that probably wouldn't work as a government project. I think that's a sad thing.
[8:56] However, I do think the companies developing this and the government absolutely have to work extremely closely together. I don't think this works as a standard "we're a company and you're the government" arrangement; there's going to have to be very, very deep partnership here.

But that's not happening now.

Well, we're trying. I think we're working very closely with the government.

And what's the most important next step, or what's the gap that needs to be closed fast, like this year?

Well, you mentioned a great one already, which is cybersecurity. I think biosecurity is another really important one. And then there are some other things, like building out infrastructure and some of the economic ideas in the blueprint; maybe those don't have to be closed this year, but pretty quickly.

How far are we from superintelligence? How far are we from AGI, artificial general intelligence, human-like capability for your models?

We're close enough to AGI that the precise definition matters. Some people would say we're already there; some people would say we're not there yet. The fact that AI is discovering legitimately new scientific knowledge, and the fact that AI is doing serious, valuable economic work at real scale, means that wherever you think we are on that curve, and wherever you want to label AGI on that curve, we're quite far along. We are in a new paradigm here of some sort.

Faster than people expect, faster than people are prepared for?

I would say major swaths of knowledge work have already been transformed. The world has adopted that at different rates, but certainly what it meant to be a coder at the beginning of 2025 versus the beginning of 2026 were very different things.

So Sam, in your new industrial policy, and in this interview, you're saying as vividly as you ever have that AI will transform every aspect of our lives. Why should people trust you to be at the forefront of it?

First of all, not literally every aspect. I think there will be many things, and I think this is wonderful, where we only care about other people. We will spend more time with other people. People will remain the most important part of our lives; we'll have these incredibly smart machines doing stuff around us, but the most fundamental part of what it means to be human is not going to transform at all. And I think this is really important: society may change in a lot of ways, but what it means to be a person, what it means to have a fulfilling life, that's pretty deep, and that's not going to change.
[12:00] A lot of other things, the economy, the way society works, those may change a lot, but I really want to get that point across. On the trust point: it's incredibly important that the people building AI are high-integrity, trustworthy people. I think almost everybody involved in our industry feels the gravity of what we're doing, so we all take that responsibility very seriously; we feel that weight every day. We also think it's very important that no one person is making, by themselves, the decisions that are going to impact all of us. I don't think we should have to trust a single person to get every decision right. We really believe in the democratization of AI, in putting this tool in the hands of people and letting people understand it, figure out what rules society collectively wants to put around it, and most importantly, how we're going to integrate it into our lives and get all these incredible benefits that we think are possible.

So, punching into your ideas: this isn't Trump-friendly by any means. One of our editors said this reads like a Bernie Sanders fever dream. Has any of this been presold to the administration or to the Hill? How do you plan to move the ball on these ideas in this political environment?

We've talked to Democrats about some of these ideas, and we've talked to Republicans about some of these ideas. Again, as I said earlier, I'm sure they aren't all good, and the goal here is really to start a conversation. That said, the receptiveness from all directions to the fact that we are going to have to try some different things, and that the sooner we can start talking through them the better, has surprised me on the upside. One of the staunchest free-market, libertarian Republicans I know, or whatever he would call himself, said something like: look, I hate to give any airtime to ideas like these, but capitalism has depended on some balance between labor and capital. And if AI goes the way I think it's going to go, then way too much leverage is going to be with capital and not with labor in the traditional sense, even though, of course, there will be new jobs. We're going to have to think about new kinds of ideas to keep capitalism thriving, and what that idea is, I don't know. But if that dude is willing to say it's time to think about how to include more people in capitalism, and that the way we do that may have to shift with AI, then my sense is society as a whole is pretty ready for that conversation.

That dude is a Republican senator?

A Republican policy person.

So, like a member of the administration, a cabinet secretary?

Like a senior... I don't want to say exactly, but a senior Republican person.

Right, got it. So, for our viewers, let's be super specific: what is most doable in your blueprint, and what's your moonshot?

Honestly, I think where there's the most energy right now for something tactical is around energy: electricity prices, and the desire to figure out a way to build more energy capacity faster to keep electricity prices down. That seems to have quite a lot of political support, so that's one thing where I would say, this is pretty standard. There are other ideas, like privacy and AI, where I also think something will happen; it's pretty non-controversial, and there's quite a bit of support.
And then there are ideas in the middle, like: how much auditing should we have on frontier systems? We don't want to overregulate, but we don't want to take big risks either, so those feel more complex. And then the ideas we were talking about a little earlier, the big shifts, how you make a change to the tax system or the economy in a larger sense, I'd say those are now in the Overton window, but near the edges.

So, to zoom out for a second, Sam: the premise of all of this is a functional government. Where are you going to find that?

In my lifetime, I don't think we've ever had a super-functional government, and yet we have managed to make great strides as a country. I think part of a democracy is that you have to manage through some chaos, and stuff is not always as easy as you would like. I'm always biased toward being too optimistic, but I assume we'll figure it out.

Having and seeing the power that you personally have, and by extension OpenAI, how has that changed you personally?

It's a very weird experience, for sure. I remember a conversation I had with somebody who works on our post-training team, the team that decides how ChatGPT is going to feel to talk to. She was saying something like: never before in human history have close to a billion people been talking to the same virtual brain. And these changes that we make, these subtle changes to whether it talks a little more in this style or a little more in that style, the impact of that is just not something any single person could have; no single person can talk to a billion people every day. So obviously you try to make the best decisions you can. When you're in the grind, you don't often feel like you're doing some big, historic, weighty thing; you're just trying to solve whatever tactical problem is in front of you. But in the rare times when you do get to step back, you think: man, we're only one company here, and there are many companies developing this. I'm only one voice inside the company, obviously a big one. It's an unbelievable honor, a cool thing, a scary thing, all together, to get to be in this moment. The strongest I ever felt it was when we had finished training GPT-4. We didn't release it for many months after we finished it, but we knew we had this incredible thing. We knew the world was never quite going to be the same, and we knew that the way we released it was hopefully going to nudge things in a better, but possibly a worse, direction. And that time, when this relatively small number of people in the world knew this thing and we couldn't talk about it yet, and we knew it was going to have this gigantic impact, I think there's no way to live through that and not have it change you, for good and bad, mostly for good, but in all sorts of ways.

How for bad?

I live with a lot of stress, anxiety, weight, whatever, and I don't think I have a very normal life anymore.

Did you have a normal life before?

Relative to now, for sure.

Thank you. A quick rapid round: my nephews are early-career and very entrepreneurial.
When they found out we were talking, they all had the same question, and it was about the future of AI pricing. Specifically, they picked up on your quote: we see a future where intelligence is a utility, like electricity or water. How does that work?

I didn't necessarily mean that about pricing, but I'll get to that, too. What I meant is: you pay an electricity bill for your house, and then you plug in whatever you want. Your computer, your oven, my kids' toys, whatever, all use this electricity. I get to use it in all sorts of ways; people come up with these brilliant things, and I plug them in. And then I don't really think about it, but there's this amazing amount of work to generate electricity and get it to my house at an amazingly cheap price, all things considered, one that has come down hugely over time. Once a month I pay my electricity bill, and I get to do all this amazing stuff, and I think about the stuff, not really the electricity. I think AI will be the same. You will have this personal super-assistant running in the cloud. It will do a lot of work, it'll plug into devices and different products, it'll log into all these different services with you, and it'll be running in the background, thinking for you, doing all this stuff. If you use it a lot, or use it at high levels of intelligence, you'll have a higher bill one month, and if you use it less, you'll have a lower bill. But you'll have this intelligence that really knows you, and you can kind of take it anywhere. In the same way that you probably don't think about electricity all the time, you just think about all the things you use that are powered by electricity, you'll have this one service, this utility. That's what we meant. On the pricing side, the price per unit of intelligence, I suspect, will continue to drop incredibly, if we can build enough infrastructure. There's so much demand for this in the world that I can see a very weird supply constraint, and then prices would go up. I think that would be bad; that's why we should build so much. But assuming we can build enough, I think the price per unit will continue to fall at the rate it's been falling, which is astronomically fast.
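As a back-of-the-envelope illustration of this metered, utility-style model, here is a small sketch: a monthly bill computed as usage times a per-unit rate that varies with the level of intelligence requested and declines over time. All tier names, rates, and the decline factor are invented for illustration; the interview does not specify any actual pricing.

```python
# Hypothetical "intelligence as a metered utility" bill. Every number and
# tier name below is invented; nothing here reflects real OpenAI pricing.

YEARLY_PRICE_DECLINE = 0.10  # assume the per-unit price falls 10x per year

BASE_RATE = {  # assumed price per unit of intelligence, by capability tier
    "fast": 0.001,
    "smart": 0.01,
    "frontier": 0.10,
}

def monthly_bill(usage: dict[str, float], years_from_now: float = 0.0) -> float:
    """Sum (units used per tier) x (per-unit rate), with the whole price
    curve declining exponentially over time, like the falling cost curve
    described above."""
    decline = YEARLY_PRICE_DECLINE ** years_from_now
    return sum(units * BASE_RATE[tier] * decline for tier, units in usage.items())

# A heavy month vs. a light month, today and two years out.
heavy = {"fast": 50_000, "smart": 2_000, "frontier": 100}
light = {"fast": 5_000, "smart": 100}
print(f"heavy month today:    ${monthly_bill(heavy):.2f}")    # $80.00
print(f"light month today:    ${monthly_bill(light):.2f}")    # $6.00
print(f"heavy month in 2 yrs: ${monthly_bill(heavy, 2):.2f}") # $0.80
```

Like an electricity bill, heavier or smarter usage costs more that month, and the same usage gets cheaper as the price per unit falls.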
[21:49] I also think the world clearly wants bigger, smarter models, and those will cost a premium until their cost comes down, too.

Sam Altman, thank you for joining Axios Behind the Curtain.

Thank you.



