
The Matrix is coming

AI Search

21m 30s · 3,691 words · ~19 min read
Auto-Generated

[0:00]The AI that you and I use every day, including ChatGPT and Gemini, lives in code like this. For example, here's the Python code for a Transformer model, and this code runs on a silicon chip. Specifically, a GPU, or many GPUs, in a data center like this. But here's the problem: these silicon chips are starting to run out of room. You see, transistors inside modern chips are already just a few nanometers wide, and they can't really get physically smaller or more efficient, so we're hitting a hard limit. But here's a wild idea. Instead of using silicon chips, what if we ran chips powered by living human brain cells? That sounds pretty crazy or sci-fi to some of you, but researchers have recently built exactly this. They built a chip powered by living human neurons and even taught it to play the video game Doom. Now, this is much bigger than just playing video games. These living brain chips could power the future of AI. They're way more energy efficient, they learn way faster, and they're also way better at controlling robots in the real world. So, in this video, we're going to go over the exact details of this chip, how they got it to play Doom, and what this means for the future of AI. As always, I'm going to break this down into simple terms so that even your grandma can understand. Let's jump right in. Now, at first glance, the idea of living brain chips feels like pure sci-fi horror, but in a strange way, it's actually the next logical step. You see, current AI systems are built from neural networks, which are themselves inspired by the brain. Our brains, for example, are also made up of biological neurons, which form a huge interconnected network. But there are key differences between AI systems and our brain. AI runs as software on GPUs, and this is already quite inefficient. Plus, GPUs are very rigid and deterministic. Think of billions of transistors firing on and off in very predictable ways. 
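By the way, if you're curious what "the Python code for a Transformer model" actually looks like at its core, here's a minimal sketch of scaled dot-product attention, the operation at the heart of every Transformer. This is a toy illustration in NumPy, not the exact code shown on screen:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d)) V -- the core Transformer operation."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V                               # weighted mix of values

# Toy example: 3 tokens with 4-dimensional embeddings
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(3, 4)) for _ in range(3))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4)
```

A full Transformer stacks many of these attention layers with feed-forward layers in between, but every one of those matrix multiplications ultimately runs on GPU transistors, which is exactly the bottleneck the video is describing.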
But biological neurons, like the ones in our brains, are very different. They're dynamic. They can reorganize themselves, and they can adapt and learn new things. Plus, they use way less energy than GPUs, which I'll talk about later in the video. So, researchers at Cortical Labs decided to take a very different approach. Instead of just making neural networks in software, what if we used real neurons? Well, last week, they did just that. Plus, they got it to successfully play the video game Doom. But before we talk about this, we have to go back to an earlier prototype that led to all of this, and that system was known as DishBrain. This is their original proof of concept, developed around 2021 and 2022. In that experiment, researchers showed that clusters of human and mouse neurons could actually learn to play a simple arcade game, Pong, which was already a pretty wild result. But DishBrain was still very much a research prototype. It required a massive neural culture of up to 1 million neurons. And even with that many cells, the training process was extremely slow. It took researchers roughly 18 months before the system showed any reliable learning behavior. Plus, it could only play Pong, which is a relatively simple game to learn, and the chip didn't really survive very well. It struggled with what they call capacitance issues. You see, when you get it to play Pong or do other stuff, you're basically sending electric signals to these neural cells. If the signals aren't carefully balanced, the cells can become overstimulated. In extreme cases, this causes something called excitotoxicity, where the neurons are basically shocked to death. So it was quite a sad experiment, to be honest. DishBrain was a decent prototype, but it had many flaws. The entire thing needed a serious redesign. And that brings us to their latest system, called CL1, which is way better. One of the biggest changes is the scale of the neural network itself. 
DishBrain required almost 1 million neurons to function, but the CL1 reduced that way down to just 200,000 neurons, while still maintaining stable learning behavior. That's a huge improvement in efficiency. Now, the real innovation isn't just the neuron count. It's keeping the thing alive. You see, in the human body, our brain cells survive because they're supported by entire biological systems. Our body delivers nutrients and oxygen to them, and it also gets rid of waste and carbon dioxide. Inside a computer, none of this exists, so we need to artificially recreate all of these life-supporting processes. We need a way to constantly feed the neurons nutrients and oxygen, and a way to remove and filter out their waste. So, they designed something called a microfluidic perfusion circuit. Think of it like a tiny automated plumbing system which constantly delivers a nutrient-rich liquid to the neurons. This liquid acts like a synthetic bloodstream, delivering glucose and other nutrients the neurons need to function and stay alive. But just as important is the other half of the process, which is removing their poop. As neurons function, they also produce waste, and if this waste builds up in the fluid around them, it can quickly become toxic and actually kill the neurons. So the CL1 also has a microscopic filtration system that, as the name implies, constantly filters out waste produced by the neurons. You can think of it as performing the same job as a kidney. Now, there's even more you need to do to keep the cells alive. Temperature control has to be very precise. The system has to keep everything at exactly 37 degrees Celsius, which is standard human body temperature. Even small changes in temperature can mess up the neurons' function. Next, the neurons also need to breathe. Neurons consume oxygen and produce carbon dioxide, just like all the other cells in your body. 
So to manage this, the CL1 also introduces an automated gas-mixing system that carefully regulates the oxygen and carbon dioxide in the chamber. So with all these systems working together, nutrient delivery, waste filtration, temperature regulation, and gas control, the CL1 can keep its neurons alive and functioning for up to six months. Still quite short in my opinion, but much better than the earlier prototype, DishBrain. What's even more impressive is the actual form factor. So the CL1 looks like this. It's completely self-contained. The pumps, the gas reservoirs, the heating, the computer interfaces, they're all housed in this thing. You don't need to hook up a separate external computer or some massive life-support system. The entire thing is pretty small, and it's designed to fit in a standard server rack, just like any normal computer server. So maybe in the future, instead of walking into a data center with a ton of GPUs, imagine seeing rows and rows of these, with living neurons powering the top AI models. It sounds wild, but it might actually be closer to reality than you'd expect. By now, you probably have a burning question in your head. Where do they get these living human brain cells? Well, they actually killed some humans and harvested their brains and extracted their brain cells. No, I'm just kidding. Fortunately, no humans were harmed in this process. The brain cells are created in the lab using a Nobel Prize-winning technique called human induced pluripotent stem cells, or iPSCs for short. In simple terms, it begins with a non-invasive cellular donation from a human, like a skin swab or a blood draw. From this, you get completely normal skin cells or blood cells. These are fully mature cells that already have a role in the body. But scientists discovered something interesting. When you expose these cells to certain biochemical signals, especially transcription factors, you can actually reprogram the cell. 
These chemical signals erase the cell's memory and reset it back to being a stem cell. It's like turning the cell into a blank slate. Specifically, this is called an induced pluripotent stem cell. And now you can reprogram this stem cell into any type of cell you want: heart cells, liver cells, muscle cells, or in our case, neurons. So, in a nutshell, that explains how they actually sourced these living human brain cells. Okay, now that we have the brain cells, how exactly do we arrange them on this chip? They aren't just floating randomly in this container, right? Instead, they're carefully placed into what is called a 2D monolayer, basically, a very thin sheet of cells arranged in a single flat layer. And this layer of neurons is then grown directly on top of a specialized silicon interface known as a high-density microelectrode array or HDMEA. And this is where live cells and the computer meet. This array is made of a substrate of glass and metal, and then embedded within it are microscopic electrodes. Because the neurons are grown flat across the surface, they end up directly touching these electrodes. In other words, the living cells are literally touching the hardware, and that allows this chip to function as a two-way communication device. First, it can stimulate the neurons. So, the system can send electrical pulses to activate certain regions of this living neural network. But it can also do the opposite. It can record natural electrical activity from the neurons themselves. Whenever a neuron fires, it produces a tiny electric signal, and the HDMEA can detect these spikes with extremely high precision. It captures changes in voltage at sub-millisecond resolution, and this is what turns this dish of cells into an actual interactive computational system. If you want to go beyond chatbots and actually get real work done with AI, you need to check out Manus, the sponsor of this video. 
Manus isn't just another AI chatbot; it's a proactive AI agent designed to deliver real, tangible results. Manus understands your goal, plans the steps, and then executes the entire task all autonomously. Here are some insane examples. I can upload this earnings report from Google, and then ask Manus to turn this into a professional presentation. Manus autonomously analyzes the attached PDF and extracts relevant information, and then autonomously prepares, designs, and generates the entire presentation for me. And here's what I got. This looks incredibly professional. All the data is correct, and this can already be used as the actual presentation. Very impressive. Or here's another example. I can upload this image of a very complex taxonomy tree and get Manus to turn this into an interactive graph. And afterwards, here is what I got. Everything is interactive, and I can move stuff around. What makes Manus different is that it can take action. It can autonomously browse the web, write and run code, create files, and interact with tools all autonomously. My favorite part is you can connect to different platforms like Gmail, Google Drive, GitHub, Slack, Notion, and more. This allows me to easily pass my existing files, folders, and media into Manus, so I don't have to click between different pages and then copy and paste files myself. Whether you're building projects, creating content, doing research, or automating complex workflows, Manus AI feels like having an extra operator on your team instead of another chat window. If you want AI that actually does things, definitely check out Manus. Click my link in the description below to get a thousand free credits. Now, to understand the scale of what's happening here, it helps to compare neuron counts. You see, the human brain contains roughly 86 billion neurons, whereas a fruit fly has roughly 100,000 neurons. But even this fly brain is capable of an insane amount of stuff. 
It can fly, walk, explore environments, find food, clean itself, avoid predators, and more. Now, the CL1 system has around 200,000 neurons, double what a fruit fly has. So this is roughly comparable to the brain of a complex insect, and they managed to get it to do something fairly complicated: playing a 3D first-person shooter game. But this leads us to the next obvious question. Neurons don't have eyes. They don't have hands. They don't have a controller. How exactly does this cluster of brain cells play the game Doom? The answer is that the entire game has to be translated into a language the nervous system understands. In other words, the game has to be turned into electric signals. So let's track how this works step by step. First, the system runs the visual gameplay data through an encoder. It uses a technique called ray casting, which helps detect the geometry of the environment, including where the walls and enemies are. So it creates a highly simplified 3D map of the game environment. Next, it has to turn this geometric data into electric pulses that neurons can understand. So how does it do that? Well, the encoder varies the frequency, amplitude, and timing of the electric pulses to represent different elements of the gameplay. The pulses are then sent to certain electrodes touching the brain cells to stimulate them. This is a very rough example, but you can think of it like this. If an enemy appears on the left side of the screen, the system might stimulate electrodes touching the left side of the neural network. If the player hits a wall, a different pattern of pulses might activate different regions of the neural network. So a wall feels like one type of zap, an enemy feels like another type of zap, and killing an enemy, losing, or getting shot might each feel like yet another type of zap. And once the neurons receive these signals, they can react by firing back. 
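To make the encoding idea concrete, here's a toy sketch of what such an encoder might look like. The specific mapping below, the regions, frequencies, and amplitudes, is invented for illustration; Cortical Labs hasn't published the real CL1 encoding parameters in this form:

```python
def encode_game_state(enemy_bearing_deg, wall_distance):
    """Map a simplified Doom frame to (electrode_region, frequency, amplitude).

    enemy_bearing_deg: angle of the nearest enemy, -90 (left) .. +90 (right)
    wall_distance: distance to the wall ahead, in game units
    All numbers and names here are illustrative, not the published protocol.
    """
    # Spatial mapping: an enemy on the left stimulates the left electrode region
    region = ("left" if enemy_bearing_deg < -15
              else "right" if enemy_bearing_deg > 15
              else "center")
    # A more off-center enemy -> higher pulse frequency
    frequency_hz = 4 + 0.2 * abs(enemy_bearing_deg)
    # A closer wall -> stronger amplitude, capped to avoid overstimulating
    # the cells (remember DishBrain's excitotoxicity problem)
    amplitude_uv = min(50.0, 100.0 / max(wall_distance, 1.0))
    return region, frequency_hz, amplitude_uv

print(encode_game_state(-40, 8))  # enemy to the left, wall fairly far away
```

The real encoder presumably handles far richer geometry from the ray caster, but the principle is the same: every game event becomes a distinct, repeatable pattern of zaps.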
Remember, this HDMEA continuously records this firing activity up to 40,000 times per second. Now, the final step is translating this neural activity back into game controls. So they use a decoder to turn the electric signals from the neurons into specific commands in the game. For example, one firing pattern might mean turn right, another might mean move forward, and another might mean shoot. This entire process happens extremely fast. The gameplay is analyzed, converted into electric signals, and injected into the neural network; the cells process this and send back their own electric signals, which are then decoded into commands and sent to the game engine, all in a fraction of a second. It's a seamless loop between biology and software. We now have a living neural network made of living human brain cells interacting with a video game. At this point, you should be able to understand the communication system, but that alone doesn't explain learning. Just because the neurons can interact with the game doesn't mean they'll actually try to play and win it. Without some kind of incentive, the neurons could just fire randomly or do whatever they want. So the challenge becomes: how do we motivate a dish of neurons to actually learn? Inside a living brain, learning is usually driven by chemical rewards. For example, when you do something beneficial, the brain releases chemicals like dopamine, which reinforce that behavior. Or when you do something harmful, you feel pain, and the brain learns not to do that again. But a dish of neurons doesn't have any of this. There's no dopamine system, no hormones, no pain. So the researchers needed to design a completely different training method. It's based on something called the free energy principle. This suggests that biological systems, from simple cells up to entire brains, try to minimize surprise. 
Living systems prefer environments that are predictable and stable, and conversely, they don't like things to be chaotic or unpredictable, or else they get stressed. So the researchers came up with a brilliant way to use this idea to motivate neurons. Whenever the neurons do a useful action in the game, like moving through a corridor correctly or killing an enemy, the system delivers a smooth, predictable electrical signal. From the neurons' perspective, this signal is stable and organized, which is what they prefer. But if they make a bad decision, like running into a wall or getting shot, the system sends the opposite signal. It blasts the neurons with chaotic electric noise, which they absolutely hate, and the only way they can reduce that chaos is by changing themselves. The neurons can change how they fire or even rewire their own connections, and this is how a living network of brain cells learns to adapt and play Doom properly. It's a brilliant idea, and it's pretty incredible to watch, but let's be clear here. These neurons aren't exactly e-sports champions. In fact, the gameplay is pretty rough. It's kind of like a baby or an insect trying to make sense of a new 3D game it has never seen before. But the real achievement isn't how well it can play video games. This signifies something way bigger. You see, this living neural chip can learn incredibly fast. Within just a few hours of exposure, the neural network begins adapting to the rules of the game. It starts forming patterns, reacting to the environment, and gradually improving its responses. If you compare this with modern AI models, they take way longer to train. They usually need millions of training iterations and tons of training data before they start performing well. So using living neurons in chips could be a way for AI to learn way faster. Now, in addition to learning speed, here's something even more important: energy. Modern AI models require huge amounts of energy. 
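Putting the pieces together, the decode-and-feedback loop described above can be sketched like this. Everything here, the action list, the decoding rule, and the pulse values, is a hypothetical toy model, not Cortical Labs' actual software:

```python
import random

ACTIONS = ["turn_left", "turn_right", "move_forward", "shoot"]

def decode(spike_counts):
    """Pick the action whose electrode region fired most (illustrative rule)."""
    return ACTIONS[max(range(len(ACTIONS)), key=lambda i: spike_counts[i])]

def feedback(good_outcome, n_electrodes=16):
    """Free-energy-principle-style feedback: predictable pulses for good
    moves, chaotic noise for bad ones (pulse values are invented)."""
    if good_outcome:
        # Smooth, regular 4 Hz train on every electrode: low "surprise"
        return [4.0] * n_electrodes
    # Unstructured noise: high "surprise" the culture will rewire to avoid
    return [random.uniform(0.0, 100.0) for _ in range(n_electrodes)]

# One step of the closed loop with fake spike data from the HDMEA
spike_counts = [3, 7, 2, 1]        # pretend region 1 fired the most
action = decode(spike_counts)
print(action)                      # turn_right
reward = feedback(good_outcome=True)
print(reward[:4])                  # [4.0, 4.0, 4.0, 4.0]
```

The real system would run this loop continuously: stimulate, record, decode, act in the game, then feed the outcome back as either calm or chaotic stimulation.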
For example, training GPT-3 required almost 1,300 megawatt-hours of electricity. That's enough to power a small town for an entire year. And that's just the training phase. Every time you use ChatGPT or Gemini, the data centers powering these models also consume substantial power. Because of AI, data centers are already approaching the limits of nearby city power grids. There's just not enough power to sustain this. And that's why the top AI labs are racing to build massive new data centers around the world. Because as we scale up AI, it requires way more compute and way more power. But what if we used living neurons instead of silicon GPUs? You see, the human brain can do so much: seeing, smelling, talking, language, reasoning, thinking, running, jumping, dancing, riding a bike, driving a car, and so on. Yet your brain runs on only about 20 watts of power, less energy than a normal light bulb. So biological neural networks are incredibly efficient. If we could bring that efficiency to the chips that power AI, it would save us a lot of energy. Finally, here's another huge reason why these brain chips are such a big deal, and this brings us to something called Moravec's paradox. This is the observation that tasks which humans find difficult, like playing chess, are actually quite easy for computers, and the reverse is also true. Tasks that humans find easy, like walking, riding a bike, or picking up a fragile object, are extremely hard for robots. The reason is that real-world environments are messy and unpredictable. Robots have to calculate a ton of stuff in real time, like force, friction, balance, and motion, using their algorithms. But biological systems, whether full brains or just living neural cells, have already evolved to deal with these unpredictable environments in real life. So they're incredibly good at handling physical motion. 
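To put that 20-watt figure in perspective, here's the back-of-the-envelope math comparing GPT-3's reported training cost to a brain running nonstop for a year:

```python
GPT3_TRAINING_MWH = 1300   # widely reported estimate for GPT-3 training
BRAIN_WATTS = 20           # typical human brain power draw

hours_per_year = 365 * 24                                   # 8760 hours
brain_kwh_per_year = BRAIN_WATTS * hours_per_year / 1000    # 175.2 kWh
brain_mwh_per_year = brain_kwh_per_year / 1000              # ~0.175 MWh

brain_years = GPT3_TRAINING_MWH / brain_mwh_per_year
print(round(brain_years))  # ~7420
```

In other words, one GPT-3 training run used roughly as much energy as a human brain consumes in about 7,400 years of continuous operation, which is the efficiency gap these biological chips are chasing.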
So with these living neuron chips, there's also huge potential for them to be used in robotics and other physical AI systems. Finally, it's important to also bring ethics into the discussion. Current AI models just live in software code and silicon chips, so they might not be conscious or sentient, although even that is a gray area. But what about the chip we're talking about today? This uses real living human brain cells. So in this case, is it conscious or sentient? Is it ethical to constantly zap these living neurons to play video games or force them to answer your prompts? That's where we start to get into sci-fi horror territory. Is this a form of digital torture? Are they suffering in a never-ending simulation? According to the researchers at Cortical Labs, the CL1 chip should not be conscious or sentient because it's way too small. They argue that a real brain isn't just a collection of neurons. It's a very complex system with specialized structures. We have a central nervous system, sensory organs, and other systems responsible for things like emotion, memory, awareness, and pain. None of that exists in these cultures, and therefore, they claim, it's not conscious or sentient. But what happens if you scale this up to data centers? Imagine rows and rows of these living brain chips hooked up together in a massive data center. In that case, would it be big or complex enough to be considered conscious? And if so, would it then be unethical? Let me know in the comments what you think. Anyways, we covered a lot of ground today. Hopefully it was easy to understand, and hopefully you learned something interesting. As always, I'll be on the lookout for the top AI news and tools to share with you. So if you enjoyed this video, remember to like, share, subscribe, and stay tuned for more content. Also, there's just so much happening in the world of AI every week that I can't possibly cover everything on my YouTube channel. 
So to really stay up to date with all that's going on in AI, be sure to subscribe to my free weekly newsletter. The link to that will be in the description below. Thanks for watching, and I'll see you in the next one.
