
The INSANE Truth About OpenAI

MagnatesMedia

39m 37s · 5,488 words · ~28 min read
Auto-Generated

[0:00]You have an incredible amount of power. Why should we trust you? Um, you shouldn't trust me. Since launching ChatGPT, OpenAI has become one of the most influential and valuable tech companies in the world. But the story of what's going on behind the scenes is crazy. From trying to overthrow their CEO, to completely abandoning their original principles, this video is the insane history of OpenAI. But it's also a journey through the past, present and future of artificial intelligence. And this is a story that affects us all.

[0:50]Tell me the name of part one.

[1:00]Sam Altman studied computer science at Stanford, but he dropped out to work on his own business. It was called Loopt, and it was a way of sharing your location with friends using your phone. However, since Sam started Loopt before the iPhone and the App Store even existed, this proved to be a challenge. But Sam worked tirelessly on the business, mostly living off instant noodles and ice cream. In fact, he worked so hard and his diet was so poor that he actually developed scurvy. Then in 2005, he joined the first ever Y Combinator class, which is basically a bootcamp for startups. And it was here that Sam attracted the attention of Y Combinator's founder, Paul Graham. Paul had a lot of business experience and became a mentor to Sam, as the two got on extremely well. However, Paul also observed something curious about Sam. He said, Sam is extremely good at becoming powerful. You could parachute him into an island full of cannibals and come back in five years, and he'd be the king. Interestingly, before long, Sam ended up becoming president of Y Combinator. This meant at just 30 years old, Sam Altman was now leading the most prestigious startup accelerator, which allowed him to build relationships with many of Silicon Valley's most influential entrepreneurs. Sam became extremely well connected, and his reputation in the tech world grew massively. Sam soon began doing more public speeches, and one of his favorite topics to discuss was Artificial Intelligence.

[2:59]In 2015, Elon Musk was terrified. He was concerned about the lack of safety precautions around AI, which he felt posed an existential threat to humanity. I don't think most people understand just how quickly machine intelligence is advancing. I tried to convince people to slow down, slow down AI, to regulate AI. This was futile. At this point in 2015, Google was the undisputed leader in artificial intelligence. They'd been acquiring AI research labs and had roughly three-quarters of the top AI talent working for them. And yet, when Elon spoke with Google's CEO at the time, Larry Page, Larry didn't seem all that worried about AI. Elon asked him how he could be so sure superintelligence wouldn't wipe out humanity, but Larry dismissed it completely, saying Elon was being way too paranoid.

[4:09]Just one company that has close to a monopoly on AI talent and computers, like scaled computing, and the person who's in charge doesn't seem to care about safety.

[4:24]This is not good. So Elon desperately felt he needed to dilute Google's power, and that would lead him to a partnership with a man named Sam Altman.

[4:50]In 2015, ten influential people in the tech world met for dinner. Elon Musk and Sam Altman were both there, so was Greg Brockman, who'd been influential in growing Stripe, and also Ilya Sutskever, who was one of the most respected researchers in AI. At the dinner, they all talked very seriously about artificial intelligence and its potential consequences, and they discussed how they could build an AI company together to rival Google. Elon said he would put forward a billion dollars of funding. And so they figured with Elon's investment money, Greg's business operation experience, Ilya's AI skills, and Sam to orchestrate everything, they would have the dream team. And thus, OpenAI was founded in 2015 as a non-profit organization. The reason was that they said having a profit motivation with a technology like this could be very dangerous, and instead it should be built for the good of the world. "What's the furthest thing from Google? It would be like a non-profit that is fully open, because Google was closed, for-profit. So that's why the open in OpenAI refers to open source. We don't want this to be sort of a profit-maximizing demon from hell." In their own words, OpenAI's objective was to build AI safely for the benefit of humanity. And they would share their work openly with the public for free, instead of keeping it private for their own gain. Hence the name, OpenAI. However, very soon that would all change.

[6:45]When OpenAI began in 2015, it did not look like a world-changing company. They didn't even have an office. It was just a small group working from Greg Brockman's apartment. We're sitting essentially on a couch, at a kitchen counter, and on a bed, and that's pretty much that's where that work is getting done.

[7:10]However, they had over a billion dollars pledged from various investors, and this funding meant they could attract top AI researchers very quickly. And so, they began by just experimenting. Unfortunately, they didn't really have a clear strategy. They spent a lot of their time building a bot that could play the popular game Dota 2. They figured if they could build an AI that understood the complexity of the game world, it could lead to an AI that better understood our world. Another project involved trying to build a robot butler. One of OpenAI's early employees even admitted, "We were just doing random stuff and seeing what would happen." Now, it's worth noting that at this point, neither Sam nor Elon was around much. Sam was still running Y Combinator, and Elon had his other businesses.

[8:03]Instead, OpenAI was led by Ilya, who was considered to be an AI genius, and Greg, who was considered an expert at managing business operations. But what the whole team did have was a shared vision of creating AGI, Artificial General Intelligence. There are different definitions of this, but it often means artificial intelligence that can match or surpass human capabilities in most tasks.

[8:30]But here's the key: AGI should be able to acquire new skills it wasn't even trained on, which is why some researchers say, "The first superintelligent machine is the last invention humanity needs." Because if we can build a machine smarter than us, that can build even better machines, beyond what humans can even think of. So, inside OpenAI, the employees talked about AGI as though they were building God. Even in the very early days, the OpenAI team were talking about how they wanted to build something that could completely change the world. Because AI will solve all the problems that we have today. It will solve employment, it will solve disease, it will solve poverty. But it will also create new problems. The problem of fake news is going to be a million times worse. Cyber attacks will become much more extreme. We will have totally automated AI weapons. Many of the OpenAI team shared a similar feeling, that what they were building had the potential to be the greatest invention ever, but they also said it could be the greatest threat to the existence of humanity. It's not that it's going to actively hate humans and want to harm them, but it is going to be too powerful. And I think a good analogy would be the way humans treat animals. It's not that we hate animals, but when the time comes to build a highway between two cities, we are not asking the animals for permission. A very simple example would be if we did create superintelligence and asked it to help Earth, there could be unintended consequences, like the AI deciding the planet would be a lot safer without humans. Sam Altman was saying similar things too. On the one hand, he would publicly say that with AI, "We can cure all human disease. We can build new realities." And then he would also say, "If this technology goes wrong, it can go quite wrong. I think AI will probably like, most likely sort of lead to the end of the world, but in the meantime, uh there will be great companies." 
What's kind of funny is that they were saying this stuff back when they were making AI bots for video games. So most people didn't take them too seriously back then. In fact, many felt the idea of AI becoming powerful enough to threaten humans was laughable. As at this point, AI still felt quite primitive. The cynical explanation was that the OpenAI team were talking about AI saving humanity or wiping out the human race because it helped attract publicity and investors. But it does seem most of them genuinely believed what they were saying about the potential power of this technology, and they weren't the only ones. Vladimir Putin infamously said, "Whoever wins the AI race, will be the ruler of the world."

[11:46]Back in 2015, when OpenAI began, the reason most people weren't paying much attention to AI was that the field of artificial intelligence had seen decades of slow progress. It had been dubbed the AI winter, as after a lot of initial hype, there'd been far fewer breakthroughs than expected. And so funding dried up, and many researchers moved on.

[12:09]But in 2015, the same year OpenAI began, for the first time an AI program beat a professional Go player, which is a complex strategy game. And the AI would later go on to beat the world's best human player. This was exciting news, but it also illustrated the problem with current AI technology.

[12:32]That AI could only play Go, and nothing else. Which meant if you wanted the AI to do any other task, like write a story or calculate an equation, you'd have to build and train a whole new system for that one task, which was extremely time-consuming. This was largely because the training data you fed the AI had to be clearly labeled to explain what it was, like Hotdog or Not Hotdog. The TV show Silicon Valley parodied this perfectly. If you trained an AI on enough specific data, an AI could tell if an image was a Hotdog or not a Hotdog, but it had no concept of anything else. Basically, machines could do one thing well if trained, but it was very narrowly focused. Whereas human brains were special, because they could do so many different things, which was dubbed General Intelligence. However, in 2017, a team of scientists working at Google published a paper that would change everything.
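The narrow, label-driven approach described above can be sketched in a few lines. Below is a toy "Hotdog or Not Hotdog" classifier in which every training example must carry an explicit human-provided label, and the model can answer nothing outside its two labels. This is purely illustrative: the two-number "feature vectors" and all the values are made up, and no real image classifier is this simple.

```python
# Toy illustration (not any real system): a pre-Transformer "narrow AI" is a
# model trained on ONE labeled task -- here, hotdog vs. not-hotdog.
# Each "image" is reduced to a made-up 2-number feature vector for simplicity.

def train_centroids(labeled_examples):
    """Learn one centroid (average feature vector) per label from (features, label) pairs."""
    sums, counts = {}, {}
    for features, label in labeled_examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc] for label, acc in sums.items()}

def classify(centroids, features):
    """Assign the label whose centroid is nearest (squared Euclidean distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, features))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Every training example needs an explicit human-written label:
data = [
    ([0.9, 0.8], "hotdog"), ([0.8, 0.9], "hotdog"),
    ([0.1, 0.2], "not hotdog"), ([0.2, 0.1], "not hotdog"),
]
model = train_centroids(data)
print(classify(model, [0.85, 0.85]))  # hotdog
print(classify(model, [0.15, 0.15]))  # not hotdog
# The model has no concept of anything outside these two labels --
# show it a cat photo and it will still answer "hotdog" or "not hotdog".
```

The key limitation the video describes is visible here: the labels are baked in, so supporting any new task means collecting freshly labeled data and training a whole new model.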

[13:45]A small team of engineers at Google published a paper called "Attention is All You Need," and they put forward a new type of AI architecture, known as the Transformer. Unlike previous AI systems that needed to be fed highly specific data that was all labeled clearly to explain what the data was, like Hotdog or Not Hotdog, the Transformer was different. It could take in random, messy, unlabeled data, and essentially teach itself, and it worked surprisingly well. What's interesting is that it was engineers at Google, not OpenAI, who made this initial breakthrough with the Transformer. But Google had become so big that they were very slow and cautious, so even though they developed their own AI chatbot before OpenAI, they didn't release it to the public. They worried it could make outlandish comments that hurt Google's reputation and opened up legal and regulatory risks. Most crucially, Google worried it could hurt their search advertising business, which funded everything Google did. In hindsight, Google's decision to move so slowly turned out to be a grave mistake, as they left the door open for OpenAI to capitalize on Google's invention instead. Ilya read this research paper about the Transformer and immediately saw its potential. As a result, OpenAI became one of the first companies to seriously start experimenting with this technology. That's where the famous GPT acronym comes from: Generative Pre-trained Transformer. The Transformer could handle far more data and process human language much faster. Most importantly, it could handle pretty much any query, meaning they were more general. So, thanks to the Transformer, OpenAI suddenly started making huge progress very quickly.
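The core operation of the Transformer from "Attention Is All You Need" can be sketched very compactly. The pure-Python toy below shows scaled dot-product attention: each token's query is compared against every key, the scores become weights via softmax, and the output is a weighted blend of the values. This is a bare illustration with made-up numbers; real Transformers add learned weight matrices, multiple heads, and many stacked layers, and this is in no way OpenAI's or Google's actual code.

```python
# Minimal sketch of scaled dot-product attention, the Transformer's core
# operation -- pure Python, toy-sized vectors, no learned parameters.
import math

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """For each query, blend the value vectors, weighted by query-key similarity."""
    d_k = len(keys[0])
    outputs = []
    for q in queries:
        # Dot-product similarity with every key, scaled by sqrt(d_k)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k) for k in keys]
        weights = softmax(scores)  # attention weights sum to 1
        # Weighted average of the value vectors
        out = [sum(w * v[i] for w, v in zip(weights, values)) for i in range(len(values[0]))]
        outputs.append(out)
    return outputs

# Three "tokens", each a made-up 2-dim representation:
tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
# Self-attention: queries = keys = values, so every token attends to every
# token and each output row is a context-aware blend of the whole sequence.
print(attention(tokens, tokens, tokens))
```

The reason this mattered for generality is that nothing in the mechanism is task-specific: the same attend-to-everything operation works on any sequence of tokens, which is part of why one architecture could absorb messy, unlabeled text at scale.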

[15:40]Now, you may wonder, why didn't the AI researchers of the 20th century come up with this? Well, they were limited by a lack of compute power and the lack of the Internet. You see, even though this wasn't the Internet's purpose, the Internet had become the perfect training data for an AI. Because basically every book and article had become digitized, and humans create endless amounts of content. And so basically everything humans had ever written online could now be fed into these AI models as training data for neural networks. These models have been described as black boxes, as we don't fully know how they do what they do. We just add input and receive output. But all of this meant that instead of the old AI models that had been trained to do one specific task, these new AI Transformer models became extremely broad and general. They'd basically been trained on all text available, so you could ask them anything. However, this method of just scraping the Internet as training data raises obvious copyright concerns. But the Silicon Valley ethos has always been to ask forgiveness, not permission. OpenAI knew if they started asking big companies if they could scrape all their data, of course there'd be lots of pushback and discussions about royalties. So, OpenAI just went and did it. And it worked so well that by the time OpenAI got to their GPT-2 language model, they started to become worried. They feared that what they were building was so powerful that if they freely shared the open-source code for it, like they'd promised all along, in the wrong hands, this could become very dangerous. So, OpenAI announced, "Due to our concerns about malicious applications of the technology, we are not releasing the trained model." This actually helped generate a huge amount of publicity and hype for them. For example, Wired Magazine published an article called "The AI Text Generator That's Too Dangerous to Make Public." OpenAI also didn't disclose what data sets had been used to train it.
It started to become clear that OpenAI maybe didn't want to be so open after all. On this channel, I cover all kinds of entrepreneur stories. But have you ever thought about starting your own business? If so, today's sponsor Bizzi offers a free LLC formation service. You just pay the state fees, and they'll get your business incorporated and handle all the paperwork. When I started my business, I remember it was quite overwhelming. But Bizzi makes the process so much simpler. For example, Bizzi can take care of the ongoing filing requirements with your state, which is a huge time-saver. And Bizzi can even provide you with a professional address, with digital mail scanning, giving you access to your mail from anywhere, anytime. Basically, Bizzi gives entrepreneurs the tools to start and manage their business, which means you can focus on making money. With over 20 years experience, they've already helped over a million entrepreneurs, and I've personally found them great to work with. So, if you want to start your own company, I honestly think Bizzi makes it so much easier. Just use my link in the description to get started today.

[19:12]In 2018, Elon announced he was resigning from OpenAI's Board of Directors. Publicly, they said this was due to a conflict of interests, as he was CEO of Tesla, which was developing its own AI. But the truth was very different. Elon had wanted to take over OpenAI and become the CEO. He also proposed OpenAI becoming part of Tesla, but the board had refused, and that's why he was now leaving. Unfortunately for OpenAI, this meant he was also taking his investment money with him. He had pledged one billion dollars in total, but it's believed less than a hundred million had been paid so far, and now OpenAI wouldn't be getting the rest. This left OpenAI with a huge funding problem, and so, many employees were extremely worried about what this meant for the future of the business.

[20:06]And this is where OpenAI made a very controversial decision. In order to increase their ability to raise more capital and attract investors, OpenAI decided to switch from being a non-profit to a for-profit business instead. They also announced they'd be licensing their technology for commercial use. Now, it's important to note that there's obviously nothing inherently wrong with being a company that makes a profit. And Sam argued this was necessary to raise more investment, plus they said the profit investors could make was capped at a hundred times the investment they made. However, many felt this was a complete betrayal of the whole reason they'd started the company. Not just that, but OpenAI soon formed a partnership with Microsoft, who agreed to invest a billion dollars. Microsoft had lots of raw computing power, which OpenAI needed, so the deal made sense for both sides. But OpenAI's mission had been to provide an alternative to Big Tech, and now they were going to help one of the world's most powerful tech companies become more powerful. So, many would argue OpenAI had completely backtracked on its mission for democratizing AI. Remember, the name OpenAI was chosen, as it was meant to be open source and freely owned by the world. And yet, as soon as they had an actually powerful product, they didn't want to be open, but instead would become increasingly secretive. Elon would later sue the company, demanding they change their name from OpenAI to ClosedAI. After transitioning to a for-profit company, they also now needed an official CEO, and it was Sam Altman who got the role. Suddenly, Paul Graham's comments about Sam being good at getting into positions of power, seemed more relevant than ever.

[22:03]In 2020, OpenAI unveiled GPT-3, a language model trained on massive Internet data sets.

[22:12]But the real groundbreaking moment was on the 30th of November, 2022, when OpenAI publicly released its chatbot ChatGPT. At this point, most of the general public had never heard of OpenAI, so there was no big fanfare. It started with Sam making a simple tweet, saying, "Today we launched ChatGPT, try talking with it here." What's interesting is that OpenAI's leadership said this was just a low-key research preview, and so expectations were low. Employees took bets on how many users they'd get in the first week, and the highest guess was a hundred thousand users. In reality, they were completely wrong. ChatGPT went viral, and rapidly captured the attention of the world. In just two months, it became the fastest app to reach a hundred million users. For context, it had taken Facebook four and a half years to hit the same milestone. This unexpected growth was of course very exciting for OpenAI. But what's interesting is that internally at the company, there'd been some employees uncomfortable releasing ChatGPT so quickly. And some of the safety team weren't even aware it was going to be released. They argued that they didn't know how it might be misused by the public. There were obvious risks like hackers using it to find vulnerabilities in code, or people using it to help them commit crimes. But there was no way of knowing quite what would happen when the general public started interacting with ChatGPT. Plus, the team were aware it made a lot of factual errors, which were dubbed hallucinations. Still, despite concerns, ChatGPT went live, and the response from the public was extremely positive. Everyone had tried chatbots before, but they always felt extremely robotic. If you didn't ask your question in the right way, they were useless, as they were basically giving pre-programmed answers. But ChatGPT felt much more knowledgeable and conversational. 
Within a couple of months of ChatGPT being released, Microsoft increased its ownership stake in OpenAI with a new ten billion dollar investment. ChatGPT's release also caused the AI race to really ramp up. Investors began throwing more money at AI projects than ever before, and the big tech companies all scrambled to release their own AI models to compete. OpenAI seemed to be leading though. They continued to release new products and became even more commercialized with paid plans and selling their underlying technology to businesses. However, at OpenAI, a divide was growing between those focused on product versus those focused on safety. OpenAI's star engineer, Ilya Sutskever, seemed to grow concerned, and began working more closely with the company's safety team. Then, a group of nine current and former OpenAI employees accused the company of prioritizing profits over safety, and said OpenAI used restrictive agreements to silence safety concerns.

[25:17]One of their key safety researchers quit for the same reason. Meanwhile, several of OpenAI's lead developers left to start a rival company called Anthropic, with the goal of building a safer AI alternative. But these internal conflicts didn't seem to slow OpenAI down. Not only was ChatGPT taking the world by storm, but OpenAI's image generator DALL-E and video generator Sora were also making incredible progress. In fact, Sam's vision of what AI could achieve only seemed to get bigger. He was publicly talking about how what they're building would create a world of abundance and could help end poverty and disease. At a major gathering of world leaders, Sam said, "I think this will be the most transformative and beneficial technology humanity has yet invented." He went on to say how there is nothing else he would rather be working on. But then came one of the most shocking twists in recent business history. Less than 24 hours after giving that speech, Sam Altman was fired.

[26:37]Sam Altman got a text from Ilya, asking him to join a video call. Sam wasn't sure what it was about, and so he was very surprised to find that the OpenAI board was on the call, except for his friend Greg Brockman. The board told Sam they were firing him. The call ended shortly after, and Sam was locked out of his OpenAI computer. It's hard to overstate what a shock this was to everyone, including Sam. He was the face of OpenAI, one of the most exciting companies in the world, and now he was being kicked out. It was Friday, November 17th, 2023, when Sam was told the news. And the OpenAI board put out a statement, saying they had removed co-founder Sam Altman as CEO. The board's statement explained that Sam was "not consistently candid in his communications with the board," basically implying Sam had lied to them, and they couldn't trust him. The board then said they no longer had confidence in Altman's ability to continue leading OpenAI. It was all kind of vague, so pretty much everyone was asking, "What really happened here?" And there were many different theories about why he'd been fired. The board's main accusation was that Sam had a habit of outright lying to them, and so they often felt they couldn't trust what he said. For example, Sam said things had been approved by their internal safety board which actually hadn't been approved. It's also believed Sam had fallen out with one of the board members, Helen Toner, after she published a paper where she basically suggested their competitor Anthropic was safer than ChatGPT. Apparently, Sam had taken issue with that and tried to get Helen kicked off the board. He reportedly told the other board members that everyone else agreed Helen should be fired, even though they hadn't said that. The board members began to feel that he was playing them against each other.
Another theory about why the OpenAI Board turned on Sam is that they had become annoyed that Sam seemed to be building a whole empire of different projects outside of OpenAI. The board felt this was a major distraction. For example, Sam had been trying to raise tens of billions of dollars from the Middle East to fund an AI chipmaking business. He'd also started a project called Worldcoin, a crypto-based network that would give everyone worldwide a unique digital identity by scanning their eyes. Sam believed that AI may put huge amounts of people out of work, and thus, they'd need a way to distribute a Universal Basic Income. But some people think the board was concerned about how Sam wanted to use OpenAI's technology for all his own separate projects. However, there's also another very prominent theory about why the board really turned on Sam, and it's the most concerning one. Some believe that the reason Ilya turned on Sam was because he saw something internally that made him worried. Ilya had been heavily involved in safety at OpenAI, and for him to suddenly flip on his co-founder was concerning. It became a meme on social media, with people asking, "What did Ilya see?" Still, no matter the exact reasons the board had, the fact was that on Friday, November 17th, Sam was kicked out of OpenAI. And it was announced they'd begin looking for a new CEO. But by Saturday, something very unexpected started happening. OpenAI employees began revolting against the board. It began when co-founder Greg Brockman resigned from the company in solidarity with Sam. But over the weekend, more and more OpenAI employees came forward in support of Sam. A letter was drafted by OpenAI staff, saying they disagreed with the board's decision to fire Sam. And they complained the board hadn't given adequate explanation for why. A petition was then created by employees to say that they would leave the company if Sam wasn't brought back. 
And nearly all of the company's 800 employees signed it, to say they didn't want to work there unless Sam and Greg were brought back. Social media became flooded with posts from employees, saying, "OpenAI is nothing without its people." Sam was replying to each one individually with heart emojis. Now, it's worth noting that part of this support probably wasn't just out of loyalty. You see, right before this coup happened, OpenAI had been planning a share sale for employees, which would mean a big cash payout for staff. But all this chaos going on would have probably destroyed the chances of that happening, and massively hurt OpenAI's valuation. So, keeping Sam in place was probably in the employees' best financial interests. But either way, the OpenAI board basically had a mass mutiny on its hands. They had never expected the staff to be so loyal to Sam. And to make matters worse for them, Microsoft announced Sam and Greg would be joining them instead, and that they would hire any OpenAI employees who wanted to leave. At first, the board still tried to press on with their plan, and began lining up a new CEO to replace Sam. But then on Monday morning came the final dagger. Ilya, one of the four board members who had initially pushed Sam out, changed his mind and signed the petition, saying he wanted Sam to stay. Ilya tweeted, "I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together, and I will do everything I can to reunite the company." Now, it's unclear if he genuinely felt that way, or he had just realized that all the other employees were clearly siding with Sam. But Ilya siding with Sam was the moment the remaining board members realized they had lost the battle. If you come for the king, you best not miss, and the board had missed.

[32:47]By Tuesday, a deal was reached. The board members who turned on Sam would be kicked off the board, and new board members would be brought in. Sam would return as CEO, and Greg would come back too. Greg posted, saying, "We are so back." And they held a company-wide party at the office. It truly was a party atmosphere. What's really crazy is that all of this happened in just five days. On Friday, Sam was ambushed and thrown out, by Tuesday, he was back as CEO. His enemies on the board all gone, and the entire company had declared loyalty to him. Not just that, Sam would be involved in choosing the new board members, meaning he was in a stronger position than ever before. And the odds of anyone challenging him again now, seemed extremely low. As for Ilya, he lost his board seat, and ended up leaving OpenAI completely, just six months later. However, what's really fascinating about all this is that before the board tried to overthrow Sam, he had repeatedly talked about how it was only fair that the board should be able to fire him and hold him accountable. The board can fire me. I think that's important. Sam had repeated this a lot. And yet, when the board did try to fire him, it turns out they couldn't.

[35:58]A few years ago, it was widely believed that AI would come for manual jobs first, and creative jobs would be the last to go. As surely machines can't be creative. Turns out though, it's the creative industries AI is disrupting first. Likewise, a few years ago, it was hard to imagine anyone disrupting Google's dominance in search. But now, many people ask their questions to ChatGPT instead. The reality is, every industry is likely to be disrupted by AI in some way. Many will use it as a tool to help them, many will get replaced. If you want an example of how fast AI is progressing, here's the real twist. This entire video was created by AI.

[36:47]Nah, I'm just kidding, none of this was AI. For the record, not a single line in any MagnatesMedia video has ever been written by AI. These videos take a long time to make because I research and write the scripts, record the voiceover, and then spend hundreds of hours editing them. But the fact you can't always tell on YouTube anymore is kind of crazy. It was already hard to trust what you see online. But as it becomes easier than ever to create images, videos and audio of people saying and doing whatever you want, it'll be harder than ever to trust news. Or even trust that who you're speaking with is human. Social media is already filled with AI accounts.

[37:30]What's perhaps most interesting, though, is how AI will mix with other technology. As tech like virtual reality improves, we'll probably get to a point where people essentially spend more time in a virtual world than the real world, kinda like Ready Player One. Imagine being able to enter an ultra-realistic world indistinguishable from reality, but you can go wherever you want, be whoever you want, and do whatever you want. It's not hard to see how someone who's unhappy with their real life would want to switch to a virtual world, powered by AI that can generate whatever they desire. And it'll feel real. Unless, of course, we're already inside that world right now.

[38:11]So, where are we up to? As of 2025, OpenAI had closed a new funding round of 40 billion dollars, at a 300 billion dollar valuation, making it the largest private tech funding round in history. Meanwhile, Elon Musk and Sam Altman, who started as co-founders, are now competitors. They continue to publicly argue, throwing insults back and forth. However, in January 2025, all the American tech companies were shaken by the arrival of DeepSeek, a Chinese AI competitor. Hundreds of billions of dollars were wiped off the market caps of US AI stocks, as it became clear China was a serious player in the AI race. OpenAI actually accused DeepSeek of stealing its intellectual property, which a lot of people mocked, since you could argue OpenAI essentially stole that intellectual property in the first place by scraping the Internet. But either way, what is clear is that the AI race is heating up. And whilst nobody knows for sure how it will play out, it's fair to say the results will impact us all. Now OpenAI's latest funding round was led by SoftBank. And if you want to see what happened last time SoftBank pumped tens of billions into a company, click here to watch the story of the 47 billion dollar cult. Trust me, it's a crazy one. I'll see you there. Cheers.
