[0:00]Behind the scenes on a YouTube channel, creators have access to a dashboard where we can upload videos and check in on how they're going. On the content page, right at the top, before videos or shorts, there is a tab called Inspiration. It's been around for a little while now, and it contains AI-generated suggestions for future video ideas. I usually avoid it because it's pretty bad, but this week I happened to take a look, and what I saw there was so hilarious and scary that I want to share it with you. Welcome to the world of physics slop. These hypothetical videos include: What happens if you touch time with your head? A deep dive. Trying to learn arithmetic using only the world's most hated history books. Why Euler considered the identity formula to be a proof of God. These suggestions are AI-generated, both the titles and the thumbnails. YouTube says that they're based on data from my YouTube channel, such as recent videos that I've made, and I do see their influence in here. But they're also based on things like comments from viewers, or titles that my viewers are watching on other channels. This is how YouTube sees me, or at least the data that's driving the back end. Clicking on a video shows why it's a suggestion. Touching things with your head seems to be a common theme. This one is Planck Time: The Smallest Moment Your Head Could Ever Touch? Lots of my viewers watched similar content in the last 28 days. And that related content includes a scene from a movie involving a literal plank. But then we have things like this: The forgotten relativity experiment that almost rewrote our understanding of cosmic time. And this is where I think this tool starts to get sloppy in a dangerous way. This proposed video is about the 1920s Dayton Miller experiments, which claimed to prove that there is an ether that light travels through. And this was after the Michelson-Morley experiments had famously shown that there wasn't.
Later analysis concluded that Miller's experiments did not show statistically significant evidence for an ether, and that the signals he had seen were likely due to temperature fluctuations, or even confirmation bias from Miller wanting to find an ether. Because the lack of an ether supports Einstein's theory of special relativity, this story contains conflict between Einstein and Miller, the underdog. And that makes it fertile ground for conspiracy theories. It's got the appeal of hidden truth, suppressed information, and an underdog versus a prominent scientific figure. The thumbnail suggestion also feeds into that vibe with a mysterious golden object. And I agree that this video could be a good idea. But when I search for this topic on YouTube, I see other channels that have perhaps taken this suggestion and run with it. And there are plenty of people taking this idea in a direction that is perhaps away from what the science and the history say about it. We have titles like The scientist who proved the ether exists. And this podcast with two AIs talking to each other, called AIs discuss the ether wind, which contains a lot of claims in support of Dayton Miller. Looking into this even just briefly, I accidentally stumbled upon a community of people advocating to make ether great again, and even claiming that it is hidden information suppressed by the government. This topic is so close to pseudoscience that I wouldn't want to tackle it without being really careful. Perhaps the AI knows, though, that leaning into the sensational side would lead to more views. Here's another suggestion: Debunking the myth of the black hole at our galaxy's center using data. This one deserves a double take, because I thought there was a black hole at the galactic center, called Sagittarius A star, the one that we took a photo of back in 2022. Unless I missed something, this video title is just false, or maybe badly worded.
The suggested description says that Sagittarius A star isn't a hungry, infinite void, but a precise mathematical solution. I think this suggestion is just a bit confused. Yes, there's mathematics behind the theory of black holes, but that doesn't make them a myth. And that makes it a bit worse than this other suggestion: Mapping the topology of a black hole through 17th century poetic metaphors. And I know YouTube isn't promising that these ideas are correct. There's even a disclaimer. They're only intended for us to use as inspiration. We as creators still need to be the filter. That is pretty obvious, but some of these suggestions do reveal deeper truths about science communication. Social media, perhaps partly by accident, already incentivizes simple stories that confirm our existing biases. Things we find compelling to believe, such as elites hiding information, or scientists being wrong. Or something that most people think is good turning out to actually be bad. These offer the chance for a viewer to feel smarter than others, or part of a smaller, more exclusive club of people with the same beliefs. The appeal of stories like this is behind some of the more harmful pseudoscience out there, and these AI suggestions have managed to capture some of those biases. Videos like Examining the bizarre mathematical theories found in Lewis Carroll's private diary. Or The secret physics diagrams found in Marie Curie's original laboratory notebooks. Or Reviewing Richard Feynman's personal sketches of waveform aesthetics and unseen geometries. These sound innocent, but I just think there isn't anything there unless you make stuff up. I don't think Feynman's drawings of unseen geometries actually exist. But these suggestions almost invite creators to fill in the blanks. You could claim almost anything about these secret notes, because they don't exist: no one has read them, so no one can refute you.
And unless you have a reputation on the line, you might be unlikely to even get fact-checked. It's the same with What was inside Nikola Tesla's suitcase? Analyzing his final scientific research notes. Is there any verified information about what was in Tesla's suitcase? Interesting if true, but I've been working on a video about Tesla and I haven't heard of this suitcase. And the thing is, I actually think YouTube are probably right on this. A video with this title does have a good shot at getting lots of views. On the topic of Tesla, there's a quote commonly attributed to him, regarding his fights with other people who apparently stole his work. And it goes, I don't care that they stole my idea, I care that they don't have any of their own. And maybe that's how I feel about this use of AI. There's something honorable about coming up with your own ideas, through human inspiration or discussions with other people. It's a way to let your audience know that this is a topic you find personally interesting and valuable. Unfortunately, I couldn't find a verified source for that Tesla quote, so it might not even be from him. Maybe soon YouTube or another platform will just make all of these videos themselves, without needing humans to upload them. Imagine a feed where you see all of these videos made in full by AI, because the platform knows you find them interesting. I can imagine watching a lot of these videos and maybe feeling like I'm learning something. Is there any incentive for a platform not to post these as videos? It might just be our willingness to interact with them. Fact-checking does exist as part of these models, and I imagine these ideas have already been through some degree of fact-checking. It is perhaps not that good right now, but I would expect AI fact-checking to get better. Though there are some pretty meta suggestions here, such as Why the peer review system is failing to catch sophisticated AI-generated research.
And if that really is going to be a big problem, and AI fact-checking relies on the results published in these journals, then it might become AI all the way down. I do think that the way AI works is scientifically interesting, and it can have good and useful applications. But the use that we see of it here doesn't seem to align with good intentions. As with any technology, it is about how we use it. And I wonder, is this use actively harmful? Whilst the tool might generate funny suggestions for other channels, for channels about science or medicine it could be potentially dangerous. Whilst a creator should obviously take these ideas with a grain of salt and filter them, not every channel is going to filter them to the same degree. Especially not when uploading a large quantity of videos is a good way to grow a channel. Slop channels are absolutely already making these videos, and some of these suggestions are actually good. The most bizarre college entrance exam and Hidden mathematical secrets in Escher's prints are both videos I've basically made in the past, but these are actually probably better title suggestions than what I used. The suggestions I'm showing you are ones I've picked because I find them especially notable. So this sample of mine does have a bias to it, and they're not all this bad; many are quite benign. There is one theme of suggestions that I find absolutely hilarious, but first, here's a couple of highly commended ideas. We've got Visualizing two-dimensional objects using complex fractal equations, which I find funny because I don't think visualizing a two-dimensional object is that hard. It's obviously come from all my videos about visualizing four-dimensional objects, which does take a lot of work. But you can visualize a square or a circle without any equations at all. I guess the intention was maybe to explain fractals, but I just found the phrasing of that title funny.
And then we've got Attempting a modern astrophysics problem using Kepler's original laws of planetary motion, which sounds a lot like just doing a modern astrophysics problem, because Kepler's laws are still highly relevant to astrophysics. They're not as outdated as this title might suggest. Kepler's laws helped take Artemis around the Moon, though with plenty of other maths too. But now for my favorite idea. For context, despite most of my videos being about science, I once posted a re-baked meme with a stolen clip from the VeggieTales TV show. And that randomly became the most viewed video on my channel by quite a large margin. A partner manager from YouTube, who was a real human, once did a review of my channel and suggested that I make more videos like VeggieTales, since it did so well. And I thought that was an absurd suggestion, but now the YouTube gods have spoken once again and suggested I make The physics of VeggieTales. Videos like Analyzing the elastic collision physics of Larry the Cucumber and Bob the Tomato. That VeggieTales clip I posted is actually about the future of entertainment being randomly generated, so it fits perfectly into the slopification theme. But I can't believe I never thought about combining my viewers' interests in these ways. So stay tuned, this might be the new direction of my channel. Subscribe and come back next week to watch me calculate the impact force and deformation of Bob the Tomato's spherical mass. I can't stop laughing at these. Why is this one so scary? And my goodness, what are they doing to Larry here? This is 17th-century Hooke's law torture. It is frightening, but returning to seriousness for a minute, one of my main concerns with AI is the loss of critical thinking skills. These are already hard enough to develop as it is, in a world with so much misinformation. And it's been that way for a long time. But now, by outsourcing intelligence, we could lose that skill, among others.
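(As an aside on that Kepler point, here's a minimal sketch of how current his laws still are: using only Kepler's third law and Earth's standard gravitational parameter, you can recover the Moon's orbital period. The constants are standard textbook values, not from the video, and the example ignores the Moon's own mass, so it comes out slightly long.)

```python
import math

# Kepler's third law: T^2 = 4 * pi^2 * a^3 / (G * M)
# Earth's standard gravitational parameter GM (a standard textbook value).
GM_EARTH = 3.986004418e14  # m^3 / s^2

def orbital_period(semi_major_axis_m: float) -> float:
    """Orbital period in seconds for a small body orbiting Earth."""
    return 2 * math.pi * math.sqrt(semi_major_axis_m ** 3 / GM_EARTH)

# The Moon's mean distance from Earth, roughly 384,400 km.
moon_period_days = orbital_period(3.844e8) / 86400
# A little above the true sidereal month of ~27.3 days, because we
# treat the Moon as massless; not bad for a 17th-century law.
print(f"{moon_period_days:.2f} days")
```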
I'm worried about how soon we might get to a world where intelligence is a paid subscription, and where, if that tool is taken away from us, we lack the skills to fend for ourselves. In the age of AI, purely informational channels will probably be overtaken by AI. But creators can still lean into being human, as credibility is something that can't be AI-generated. We can lean into being messy and real, and while this YouTube tool might encourage outsourcing creativity, surely that is one thing we can also hold on to. Last year I made a video about the baloney detection kit contained within Carl Sagan's book The Demon-Haunted World. That contained a lot of tips for critical thinking when it comes to science content. That book is also often pointed out as being prophetic for predicting a Trump-like leader rising to power. But maybe it was also prophetic in the sense that it predicted the increasing need to develop the skills to tell fact from fiction. I know I certainly don't get it right all of the time. And there's extra nuance here, because being AI-generated does not make something incorrect. I think AI probably has a higher chance of being correct about something, on average, than a random human who's not an expert in that topic. Telling fact from fiction is not the same as spotting AI. AI fact-checking tools can be accurate, just not all of the time. And when people say that generative AI is just a text predictor, predicting the next word in a sentence based on what it's read before, I actually think that's too much of a simplification. Yes, prediction like that is a core part of these models, but it's not the only thing going on. There is so much more going into it now. Don't make the mistake of thinking that you know how these systems work and that they won't change. I think it's a better idea to try and keep up with the science behind AI as it changes.
That can help us understand the ways that it can be wrong and the kinds of incentives it is likely to follow. Let me shout out a couple of friends making useful content in this area. Mithuna from Looking Glass Universe is covering developments in AI from the perspective of someone who knows a lot of the deep physics. And Peter recently launched a new channel called SciencePetr, and he has a lot to say about how AI affects education, including physics education. YouTube as a platform might continue to look a bit different and be a bit less human, but there's still so much good here. I find making videos to be so much fun, and I hope I can keep doing it for a long time, even if that means calculating the deformation of Larry the Cucumber using classical solid mechanics for the sake of views.
[15:15]If one of the risks of AI is outsourcing too much of our thinking, then it becomes more important than ever to make time to actually learn, to try things for yourself and wrestle with ideas. And that's why I like today's sponsor, Brilliant. Brilliant teaches math and science in a very different way. Instead of just watching lectures, you solve problems step-by-step and interact with the ideas as you go. Their courses are written by subject matter experts and cover topics in mathematics, coding, science, AI and logic. They have a course called How AI Works, which helps you gain intuition for what is really happening under the hood of AI models, including the role of mathematics. If you want to try Brilliant, you can learn for free for 30 days at brilliant.org/tibees. And Brilliant is offering my viewers 20% off an annual premium subscription, which gives you access to everything on Brilliant. You can scan the QR code on screen or check out the link in the description. Thanks Brilliant, and thanks to my Patreon supporters for making this video possible. A special shout out to today's Patreon cat of the day, Lily, submitted by Steve.



