
OpenAI is Completely F*cked.

Moon

27m 29s · 5,239 words · ~27 min read
Auto-Generated

[0:00]Open AI was supposed to lead us into the future. The company that cracked super intelligence, the non-profit that would share it with the world.

[0:09]But something has gone incredibly wrong. The developers left, the safety teams were gutted, and they started cutting loose whatever they saw as a liability.

[0:17]They just axed Sora after it hit number one on the App Store, showing they can't even back their own flagship products.

[0:24]All right, we got some breaking news out of the business world. Open AI saying it's shutting down its video generation platform Sora.

[0:29]It made just 2.1 million dollars in its entire life, doing little else but filling our feeds with AI slop.

[0:36]Now, they're heading towards going public like Meta, Google, Amazon, Nvidia and Apple, having raised the largest funding round of its type in history at 122 billion dollars.

[0:47]Even that will only last them about 18 months at the rate they're burning through cash, and the public will get a much worse deal than investors if they do an IPO.

[0:55]In their desperation to stay relevant, OpenAI just acquired Technology Brothers,

[1:00]one of Silicon Valley's biggest tech shows, essentially buying the media that covers them, with rumors that they paid over 100 million dollars.

[1:08]And their hunger for computing power has caused what the industry is calling RAMgeddon, with OpenAI recently trying to secure 40% of the world's entire memory chip supply,

[1:18]causing RAM prices to surge 500%, and making everyday electronics more expensive for everyone.

[1:25]And then, The New Yorker published a bombshell investigation, based on interviews with over 100 people with first-hand knowledge of how Altman conducts business,

[1:34]along with never before seen internal memos, private diaries, and years of notes kept by Anthropic CEO Dario Amodei.

[1:41]What it reveals about the man running this company is unlike anything we've seen about a sitting tech CEO.

[1:47]All of this is happening as internal and external pressure grows. As of early 2026, Open AI is facing at least eight wrongful death lawsuits in total.

[1:56]Teenagers, young adults, a 40-year-old man in Colorado, an 83-year-old woman killed by her own son after ChatGPT allegedly validated his paranoid delusions for months.

[2:06]Altman's own sister has filed an amended lawsuit accusing him of sexual abuse, a case a federal judge just allowed to proceed.

[2:15]Open AI responded by allegedly pursuing the people asking questions.

[2:20]We started to see more examples of kids, frankly, dying by suicide as a result of these chatbots.

[2:26]OpenAI even sent a demand for documents to the parents of a teen who died by suicide after using ChatGPT, parents who were pushing for regulation.

[2:34]They've been accused of subpoenaing numerous non-profit groups for criticizing them, demanding the names of every former employee, congressional office, and journalist they'd ever spoken to.

[2:43]One said he was served that demand at his front door.

[2:46]Tyler Johnston runs a small AI watchdog. Last summer he was out when he received an unusual text from his roommate.

[2:53]There's someone at the door with documents for you. It was a man with subpoenas from OpenAI, the maker of ChatGPT.

[2:59]When OpenAI subpoenaed Tyler, it tucked in an expansive demand that hinted at its real agenda.

[3:06]They wanted every single document and text message and email that we had that in any way related to Open AI's restructuring.

[3:13]The money that's being spent is a message. It is saying that if anyone takes on this issue, they will be facing a wall of cash.

[3:21]According to an investigation by Vox journalist Kelsey Piper, if you work inside the company, you're subject to some of the strictest non-disclosure agreements in tech.

[3:29]They describe a regime where you will never for the rest of your life say a bad word about Open AI, and if you don't sign, you may lose your equity, which is potentially worth millions of dollars.

[3:40]And then there's the mysterious case of Suchir Balaji, who worked for Open AI for nearly four years.

[3:44]If this video is making you think about how much you actually trust the companies handling your information, your email is a good place to start.

[3:51]Which is why I want to tell you about our video sponsor, Proton Mail. Gmail is free, and you know why it's free? Because you're the product.

[3:58]Google scans your inbox to build an advertising profile on you. Every receipt, every flight confirmation, every doctor's appointment, every conversation with your mother,

[4:06]all of it feeding a machine that sells ads back to you, funneling all this targeted advertising at you.

[4:13]But Proton Mail works differently. Proton Mail uses zero access encryption, which means only you can read your emails.

[4:19]Not advertisers, not data brokers, not even Proton itself. It was built by a team of scientists who met at CERN and is based in Switzerland, which has some of the strongest privacy laws in the world.

[4:29]Which is why over 100 million people use it. Sir Tim Berners-Lee, who invented the World Wide Web, sits on their board.

[4:36]I highly recommend switching as it only takes about three minutes. And their migration tool transfers your emails, contacts, and calendar straight over from Gmail.

[4:45]Meaning you can keep everything, or just start fresh. And best of all, it's free to sign up. So make sure to check them out by using the link in the description below.

[4:56]A co-founder even said Balaji's contributions were essential and that the products wouldn't have succeeded without him, but he soon became disillusioned and quit.

[5:04]He said, quote, "If you believe what I believe, you have to just leave the company."

[5:07]Then he went public with an interview with The New York Times and was named as a potential witness in major copyright lawsuits.

[5:14]Then, one month later, he was mysteriously found dead in a San Francisco apartment, where the city's medical examiner ruled his death self-inflicted.

[5:23]No criminal charges were filed against any individual or company, but his family disputes that finding.

[5:29]The, um, official cause of death is suicide. Um, you're his mother, clearly, you spoke to him a lot. Did you get any indication that he was depressed?

[5:39]No, he just came back from vacation. He had a very active lifestyle. When we had his memorial service, many of his friends came, and everyone gave an image like he's upbeat. He was very happy.

[5:54]He celebrated his 26th birthday on 21st November, the day before he died. If he's so brave to go to news media,

[6:00]to be a witness, he's so courageous. How does someone so courageous do a cowardly act? It doesn't add up at all. Can you believe it? 40 minutes to investigate.

[6:13]40 minutes to determine the date, cause of death, and at 4:00, they give me the keys for the apartment, and they say, you can collect the body tomorrow.

[6:22]Tucker Carlson then sat down with Sam Altman himself to understand his perspective.

[6:27]So you've had complaints from one programmer who said you guys are basically stealing people's stuff and not paying them, and then he wound up murdered.

[6:34]What was that? Also a great tragedy. Uh, he committed suicide.

[6:38]No, he was definitely murdered, I think. Um, there were signs of a struggle. Of course, the surveillance camera, the wires had been cut.

[6:46]He had just ordered take out food, come back from a vacation with his friends, no indication at all that he was suicidal, no note,

[6:56]and no behavior. He had just spoken to a family member on the phone, and then he's found dead with blood in multiple rooms, and his mother claims he was murdered on your orders.

[7:06]"I haven't done too many interviews where I've been accused of, like..." "Oh, I'm not accusing you at all."

[7:10]Elon Musk stirred the pot by posting two words on X hours later: he was murdered.

[7:15]Musk offered no evidence for the claim and is engaged in his own ongoing legal disputes with Open AI.

[7:20]His mother hired private investigators, commissioned a second autopsy from an independent forensic pathologist,

[7:26]and kept paying rent on the apartment for months so they could review the evidence.

[7:30]They reported finding a ton of unexplained things about the case, like CCTV cameras unplugged or conveniently not working.

[7:38]Blood in multiple rooms, a wig soaked in blood that wasn't his, injuries the official autopsy never mentioned.

[7:44]His computer was messed up, and a pen drive with potential evidence was missing.

[7:48]And how did he wind up bleeding in two rooms after shooting himself, and why was there a wig in the room that wasn't his?

[7:58]Altman had no answer for either question, and as it stands, there is no evidence of wrongdoing.

[8:03]But even so, Open AI has struggled to shake off this event and others, as this is a company that exists by its own charter to benefit all of humanity.

[8:11]That's the promise written here in black and white. But the descent recently took another turn when it started going places the entire industry said it never would go.

[8:20]In February 2026, the Pentagon was expanding a 200 million dollar contract with Anthropic, Open AI's biggest competitor.

[8:27]The military wanted to use their AI for all lawful purposes. Anthropic said, we have two red lines, no mass surveillance of Americans and no fully autonomous weapons.

[8:36]So the Pentagon responded by removing them. Anthropic's CEO wrote publicly that he cannot in good conscience agree.

[8:43]So the Pentagon designated Anthropic a supply chain risk. That label is normally reserved for companies linked to foreign adversaries like China, Russia, and Iran.

[8:52]And this was the first time in American history that had been used against a domestic company for refusing to cooperate.

[8:59]Trump then said on Truth Social that Anthropic was a radical left, woke company. And then he would direct every federal agency to immediately stop using their technology.

[9:08]And guess who popped up that same evening? It was Sam Altman, who saw the opportunity to give the US government whatever dystopian technologies they wanted, and embraced the occasion with open arms.

[9:18]Open AI says it has red lines too, but the full contract has never been published. What they've shown are snippets, and legal experts tore them apart within hours.

[9:27]Basically, OpenAI wanted everyone to just trust them to plug their AI into Pentagon systems.

[9:32]Welcome back. Open AI facing some criticism over domestic surveillance concerns, as the company signs a new agreement with the Pentagon.

[9:40]More to the point, it still signed off on OpenAI models for classified military environments, when they were previously only used in unclassified environments.

[9:49]A former Army general counsel said he concluded that OpenAI's anti-surveillance provision doesn't really exist and that they're just trying to fake it.

[9:55]One of Open AI's own researchers described the guard rails as not really operative except as window dressing.

[10:00]The contract relies on Executive Order 12333, the same legal framework that the NSA has used for decades to justify mass surveillance of Americans by tapping communications outside of US borders.

[10:13]If your data crosses an international server, which it does every time you open your phone, it can fall outside the legal definition of domestic collection, even if you never leave your house.

[10:23]In the fallout, OpenAI's head of robotics resigned within days, and hundreds of OpenAI and Google employees signed an open letter calling for limits on AI in warfare.

[10:32]One of the most upvoted posts in the history of the ChatGPT subreddit told users to cancel their subscriptions, stating: you are training a war machine.

[10:40]Conveniently, right after the Pentagon backlash, right before a likely IPO, Open AI's charity foundation suddenly announced it would grant out one billion dollars over the next year,

[10:50]focusing on mitigating the impact of AI on jobs and mental health. So, in a nutshell, Open AI's charitable arm is pledging money to fix the damage caused by Open AI's commercial arm.

[11:00]People reacted with a mass exodus from ChatGPT to Claude. A federal judge in California later ruled that the government wasn't truly trying to protect national security,

[11:08]but was trying to punish Anthropic for saying no, calling it a retaliation. Altman even admitted this all looked opportunistic and sloppy, and said they shouldn't have rushed to get this out on a Friday.

[11:20]But this wasn't sloppy in the slightest. It was two years in the making. You see, OpenAI's usage policy once explicitly banned the use of its technology for military and warfare purposes.

[11:29]But then, in January 2024, while Altman was speaking at the World Economic Forum in Davos, they quietly deleted it from the website.

[11:36]The Intercept caught them, and when asked, OpenAI said the rewrite was to make things clearer. At the same time, while the weapons ban was still warm in the recycling bin, Altman told the room,

[11:46]I have a lot of empathy for the general nervousness and discomfort of the world towards companies like us and, you know, the other people doing similar things, which is like,

[11:57]why is our future in their hands? Why are they doing this? Why do they get to do this? This is a technology that's clearly very powerful, and we cannot say with certainty exactly what's going to happen.

[12:10]And that's the case with, you know, all new major technological revolutions, but it's easy to imagine with this one that it's going to have massive effects on the world and that it could go very wrong.

[12:20]He was preparing his company for military deals, while telling the world he understood their fear. And all of this, after admitting AI paralleled the risks of nuclear weaponry.

[12:30]Within days, Open AI then announced its first partnership with DARPA. Two years later, that huge Pentagon contract was signed at the first opportunity,

[12:38]with the title of the announcement reading, Our agreement with the Department of War.

[12:42]And despite all of that, the contracts, the controversies and billions pouring in, Open AI still can't keep its own products alive.

[12:49]The Register called them a product assassin after killing off Sora. But the truth is, they had no choice but to switch it off, as it was costing them an estimated 15 million dollars a day to run.

[12:59]That's seven times more per day than the 2.1 million dollars the whole platform made in its entire lifetime.

[13:05]But there's a much longer list of dead products buried in Open AI's graveyard than people realize.

[13:10]When GPT-4o came out, it was genuinely good, but they retired it within months. They promised developers plenty of notice,

[13:19]but the actual notice period was 15 days. It had become known as the only version of ChatGPT capable of natural conversation,

[13:24]and people posted obituaries for it because they'd built a rapport with it and learned how to use it effectively.

[13:30]One user said that GPT-4o's ability to produce working code dropped from 52% to 10% after a single update.

[13:37]Developers often only find out when their apps break, and users when their workflows stop working the way they did before.

[13:42]It is widely theorized that the models are secretly nerfed to conserve resources when Open AI moves onto the next thing.

[13:48]And it also makes for great marketing to get people through the door. When a model grabs people's attention, they grab their subscription, and then who cares what happens next?

[13:56]And while Open AI is killing its own products, the community that once supported them is quietly walking away.

[14:02]The entire vibe coding movement has become almost entirely Claude centric. Anthropic's Claude Code is now widely considered the best AI coding tool available,

[14:11]with constant, never-ending updates on X. Cursor, Lovable, and the others all work best with Claude.

[14:16]Even employees of Microsoft, Google, and OpenAI itself were caught using it before Anthropic revoked their access.

[14:22]Switching to Claude has now even become a meme at OpenAI's expense. Somewhere along the way, the most hyped company in the history of technology became pretty much a government contract with a chatbot,

[14:32]all while desperately trying to ruin your job.

[14:35]I think in the short term it will destroy a lot of jobs, uh in the long term, like every other technological revolution.

[14:43]I assume we will figure out completely new things to do. But none of this means they'll loosen their grip on society; it's actually the complete opposite. Because when you're burning this much cash with no clear path to profit,

[14:55]there's no option but to take bigger and bigger risks at the expense of everyone.

[14:59]Their next vision is a future where they sell you intelligence like a utility, monopolized and controlled by the ultra wealthy.

[15:06]We see a future where intelligence is a utility like electricity or water, and people buy it from us um on a meter and use it for whatever they want to use it for.

[15:20]The demand that we see for that seems like it's going to continue to just go like this.

[15:26]And if we don't have enough, we either can't sell it, or the price gets really high and it, you know, kind of goes to rich people.

[15:33]As Altman explains, if they don't get the unlimited money they need, only rich people will be intelligent.

[15:38]And it's even worse than it looks, because he was talking to a guy who sits on OpenAI's board. So it was essentially a cozy chat between business partners about metering intelligence for profit.

[15:49]And they'll want your money to get there. It's widely reported that Open AI is planning an IPO at a valuation of 1 trillion dollars,

[15:56]which would immediately put them among the most valuable companies in the world. A trillion dollar valuation is roughly 40 times the company's annual revenue,

[16:03]far higher than most companies at the height of the dot-com bubble. They're pulling in around 25 billion dollars in revenue now, which sounds enormous until you realize they lost around 8 billion dollars in 2025.

[16:15]And their own projections show it getting even worse. The plan is to hit profitability by 2029 or 2030.

[16:22]Most of Open AI's biggest investors are also their biggest suppliers anyway. Nvidia sells the chips, Amazon runs the cloud, and the money goes in one door and out the other.

[16:31]The pace and scale of artificial intelligence deals lately has been staggering. Open AI and AMD announcing a five-year AI data center deal.

[16:40]Open AI has signed a 300 billion dollar deal with Oracle for cloud computing power.

[16:45]Nvidia saying it's going to invest 100 billion dollars in Open AI. It's sparking a global tech rally.

[16:51]The IPO is one way to climb towards the next milestone. If AI really is going to take your job and change everything,

[16:58]then earning a piece of the company doing it feels like either a smart bet, or just a hedge against an AI driven future.

[17:05]Millions of people are going to buy in on that basis, but the terms aren't the same for everyone.

[17:10]Before everyday people get anywhere near this stock, private equity firms are already in on a completely different deal.

[17:16]Open AI was allegedly offering them a guaranteed minimum return of 17.5%, more than double the standard rate.

[17:24]They get downside protection, seniority over every other investor, and early access to unreleased models.

[17:30]The public that OpenAI was once committed to serving gets none of that. An economist who studies bubbles says three or four warning signs are already flashing.

[17:39]The only one missing is a wave of IPOs. Goldman Sachs is now predicting the biggest IPO cycle the market will have ever seen because SpaceX and Anthropic could also list soon.

[17:48]So that last warning sign is definitely starting to flash. Now, at this point, you might be thinking, none of this affects me.

[17:54]I'm not buying the IPO, I don't use Chat GPT, I couldn't care less about any of this. And that is completely reasonable, except you're already in far deeper than you probably realize.

[18:03]The S&P 500 is already rewriting its rules to fast track Open AI, SpaceX, and Anthropic within days of going public,

[18:10]skipping the usual long waiting periods that exist to let a stock prove itself. If that happens, every fund tracking those indexes would be forced to buy that stock,

[18:18]because they have to own whatever's in the index, as would anyone personally invested in the S&P 500.

[18:25]401Ks, pensions and savings of all kinds would automatically absorb shares in companies that never made a penny of profit,

[18:33]at whatever inflated price they launched at. Even right now, just 10 companies already account for over 40% of the entire index, and almost all of them are heavily invested in AI.

[18:42]Nearly half of the S&P market cap has medium to high exposure to AI, according to Citigroup's estimates.

[18:50]So what that means is that a lot of these companies in the index have put big bets on the emerging technology that's set to come out.

[18:57]The question is, will those bets pay off? And so, many people aren't aware how their retirement portfolio performance or taxable account portfolio performance is really dependent upon the success of these five companies.

[19:10]It's important for investors to understand, of the ETFs and mutual funds they have, what is my current exposure to AI?

[19:18]And if I buy stocks out of that, am I comfortable increasing my exposure in that significant of a way?

[19:25]The Bank of England warned about this back in December, saying that if the AI boom deflates, it's ordinary people's pensions and savings that ultimately suffer.

[19:32]Some of the biggest teacher pension funds have already started reducing their exposure. And besides pensions and savings, over the last few years, a huge number of just random companies and government systems have completely restructured themselves around AI.

[19:46]This dependency is part of Open AI's modus operandi because it creates long-term lock-in.

[19:51]It's a compounding situation where pension funds, governments, ordinary businesses, and an entire generation of startups are all now depending on the same small number of AI platforms.

[20:01]Platforms they don't own, can't see inside, and have absolutely no say over what happens next. There are people inside Open AI who know none of this is right.

[20:08]The problem is, they just keep on leaving the company. The co-founder and chief scientist Ilya Sutskever, the man behind most of the actual breakthroughs,

[20:16]tried to eject Sam Altman through the board. Three days later, Altman was back, the board was gone, and Sutskever was the one who had left.

[20:23]The new board is all of your typical Silicon Valley and finance lineup, including the former co-CEO of Salesforce, the former Treasury Secretary,

[20:30]the ex-CTO of Facebook, and a former Gates Foundation CEO. What people don't know is what that looked like behind the scenes.

[20:38]According to The New Yorker, Sutskever spent weeks secretly compiling 70 pages of Slack messages, HR documents, and photos taken on personal phones,

[20:47]all to avoid detection on company devices. He sent the final memos to board members as disappearing messages so no one else would ever see them.

[20:55]A board member who received them said that he was terrified. The memos began with a list headed, "Sam exhibits a consistent pattern of..." and the very first item on that list was a single word, lying.

[21:05]Altman was kicked out, and then returned with more power than he started with. And the head of the core safety team also resigned around the same time.

[21:13]He said publicly that Open AI's safety culture and processes have taken a backseat to shiny products.

[21:19]The dominoes have kept falling ever since. The super alignment team itself was completely shut down. The mission alignment team was shut down. Then the CTO left.

[21:28]The chief research officer left, the VP of research left, a co-founder left for Anthropic, the AGI readiness adviser left, and nearly half of all safety researchers walked out.

[21:37]They then changed the mission statement six times. And the most recent version deleted the word safely. At the center of what remains, Altman stands pretty much alone.

[21:46]He told Congress under oath in 2023 that he had no equity in Open AI, and earned only 76,000 dollars a year.

[21:53]You make a lot of money, do you?

[21:56]I make no, uh, I'm paid enough for health insurance. I have no equity in Open AI.

[21:59]Really? That's interesting. You need a lawyer.

[22:03]I need a what? You need a lawyer or an agent. I'm doing this because I love it. Thank you, Mr. Chairman.

[22:10]Yet he said he'd never buy a Porsche, while also being photographed riding around in supercars worth millions of dollars, which hasn't gone uncriticized.

[22:18]Listen to Joe Rogan discuss it with Tom Segura here.

[22:21]Do you know, uh, Sam Altman? He's, uh, the head of OpenAI.

[22:25]Yeah. He's always kind of said, I'm not doing this for money. I'm not making any money. And they just busted him in a 4 million dollar Koenigsegg.

[22:31]They did? Yes. Oh, that's awesome. You see if you can find that car, the video of him in that car. Oh, I don't need money. Me, money. I'm not even interested in money.

[22:40]He's driving out in a 4 million dollar Koenigsegg. Look at it. Go back to the beginning so you can see him get in it.

[22:46]Does it show him get in it? No. That's him with his fucking Yeah. Elon was mocking him. Look at that.

[22:55]Got a 4 million dollar car. There he is. Hi, busted. Hey, bro. Uh, I think you like money. Look, you don't buy one of those unless you really want to get rich.

[23:04]Yeah. You also want everyone to know how rich you are. You want everyone to know how rich you are. And you you're rich right now. Enough of I don't have a 4 million dollar car. I wouldn't drive with that.

[23:13]I'd be freaked out. I can't drive a car. I can't park it anywhere. What am I going to do? That's crazy. So that means he's got way more money than me.

[23:22]He's like, how much money do you get? How much money are you making? What are you doing that you're driving a 4 million dollar car? That's so crazy. And you're telling everybody you're not trying to make money.

[23:33]I don't. And the deeper you look, the deeper it gets. Multiple people interviewed for The New Yorker piece unprompted described Altman simply as sociopathic.

[23:42]A board member said, quote, "He has two traits almost never seen in the same person. A strong desire to please people in any given interaction, and an almost sociopathic lack of concern for the consequences of deceiving someone."

[23:54]Aaron Swartz, one of the main guys behind Reddit, a man we've spoken about in the past and one of the most celebrated internet activists of his generation,

[24:03]even warned his friends, quote, "You need to understand that Sam can never be trusted. He is a sociopath. He would do anything."

[24:10]When the board fired him in 2023, The New Yorker alleges they pressed him to acknowledge a pattern of deception.

[24:16]His response, according to people on the call, quote, "I can't change my personality." A board member's interpretation, "What it meant was, I have this trait where I lie to people, and I'm not going to stop."

[24:26]Meanwhile, The New Yorker demonstrates how safety commitments were pure theater. Open AI publicly pledged 20% of its computing power to the super alignment team,

[24:36]the team responsible for making sure AI doesn't go catastrophically wrong. The actual allocation was between 1 and 2%, and most of it was on the oldest hardware with the worst chips.

[24:46]When the team leader complained, he was told to stop pressing the point. One Microsoft executive told The New Yorker there's, quote, "a small but very real chance he's eventually remembered as a Bernie Madoff or Sam Bankman-Fried level scammer."

[24:58]Altman only recently compared training AI models to raising a human for 20 years, and to human evolution as a whole, as if it's a simple economic trade-off.

[25:06]One of the things that is always unfair in this comparison is people talk about how much energy it takes to train an AI model,

[25:15]relative to how much it costs a human to do one inference query. But it also takes a lot of energy to train a human. It takes like 20 years of life and all of the food you eat during that time before you get smart.

[25:26]And not only that, it took like the very widespread evolution of the hundred billion people that have ever lived and learn not to get eaten by predators and learn how to like figure out science and whatever to produce you.

[25:37]And then, you know, you took whatever you took. So the fair comparison is, if you ask ChatGPT a question,

[25:43]how much energy does it take once its model is trained to answer that question versus a human? And probably AI has already caught up on an energy efficiency basis measured that way.

[25:53]All this time, he's been preparing for everyone's demise. He keeps guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to.

[26:06]His backup plan is Peter Thiel's compound in New Zealand. He later said none of it would really help if AGI goes wrong, but he described preparing as "a fun hobby."

[26:14]And he's not alone in how he sees the future. The same applies to this entire class of AI leaders who collectively dominate the course of modern humanity.

[26:22]Alex Karp, Palantir's CEO, said on CNBC that AI disrupts the humanities-trained, largely Democratic voters, and makes their economic power less.

[26:38]He wrote a book arguing that tech companies fusing with state power is how you save Western Civilization. Peter Thiel similarly said that competition is a relic of history,

[26:47]and that monopolies drive progress. These are the people deciding what the future looks like, and the thing that they all agree on is that the public doesn't get a say in whatever they decide.

[26:56]So Open AI is losing money, but it isn't losing its power. It's spending its money to secure dependency and will ask you for money next when it does its IPO.

[27:05]Every move it makes looks reckless on the surface until you realize it's all part of the same strategy. Move into every system imaginable, become impossible to remove, and then charge for dependency.

[27:15]That charter still says, benefits all of humanity. But the company that wrote it has deleted its own safety promises, emptied its own safety teams, and handed this technology to the government with a free pass, all while asking you to trust them.

[27:29]So at some point, you have to stop trusting and start watching what they do and be very wary of what they plan to do next.
