[0:00] My AI startup took me 6 weeks to build. If I had started in 2022, it would have taken me 4 years. And when you really think about that, it basically means everyone now has a chance. This is Mo, former Chief Business Officer at Google X, where he spent over a decade running business innovation. He says everyone now has a chance, but only if they understand what's actually coming. The skill of an entrepreneur in the past was the ability to foresee something in the future that no one else saw and to prepare for that. That game of chess is over. It's off the table; this has turned into squash. I'm just basically saying, get prepared. How much time do we have to prepare? Within the next 2 to 3 years, you're going to see a massive shift in the jobs market. So you asked me what we should do. Number one, learn the skills. Number two... Well, thank you so much for joining us. Welcome to Silicon Valley Girl. Thank you. You said we're about to enter what you call 12 to 15 years of hell before heaven, possibly starting in 2027. So what's going to happen in 2027? I think it will peak in 2027. It already started, for sure. I call it F-A-C-E R-I-P-S, just as an acronym for people to remember. Each of those letters is a word, but let me tell the story quickly in ways that people will understand. There is the power and freedom dimension, the P and the F. There is the R and the C, the reality and connection dimension. There is the I and the E, the innovation and economics dimension. And then there is the A. So let me go through them very quickly. To start with, AI is our last innovation. Most people don't know that, but we are already building AIs that are building AIs.
We're building AIs that are making scientific discoveries that will blow you away, reinventing math, understanding biology in ways we've never seen before, understanding material science in ways that are just mind-blowing. And so very quickly, most innovation, definitely tech innovation, will be done at the hands of AI. Because of that, and because most tasks that need intelligence will be handed over to the machines as the machines' capabilities increase. There's lots of debate around when exactly. Say it's 10 years, say it's 2 years, it doesn't really matter. Eventually, every job that AI does better than humans will be handed over to AI. And every task we've ever assigned to the machines, they eventually ended up doing better than humans. So the first part of the dystopia is that innovation is going to take away all jobs. Okay? Of course, the capitalists of Silicon Valley will tell you this is great, it's incredible productivity gains for everyone, jobs will be easy, people won't have to work as hard, all of the fancy, PR-led conversations where we try to appear altruistic when we share them. The truth is, people will be out of jobs. Certain sectors will see unemployment rates of 10, 20, 30% in the next few years. And when that happens, economics at large will change massively. The whole definition of capitalism was labor arbitrage, and without the need for labor, the need to keep people happy and engaged and alive, and not so disgruntled that they rise up, becomes more of an obligation than a desire. There is a very big difference between wanting someone to be their best because they are a productive member of society and just giving them a UBI, a universal basic income, to give them a life so that they don't rise up.
And you can imagine that in a capitalist society, especially the US and most of the West, while we start with UBI, that UBI is going to be paid by the taxes of the platform owners. And the platform owners will have enough power to say, I don't want to pay that much; those guys are not producing anything. So over time, you can imagine how that turns into a struggle. So that dimension, intelligence and innovation becoming entirely a machine thing, leads to a redefinition of economics, of money, of jobs, of earnings, of capitalism, the need for a new economic theory when there is no demand for the supply that AI is generating. All of that has to be resolved. Then there is the P-F dimension, the power and freedom dimension. It's very clearly understood, if you look at human history: the best hunter in a tribe could feed the tribe a week more, let's say, and as a result got the favor of a few mates in the tribe. The best farmers got estates and land because they could feed the tribe a season more. The best industrialists had the exuberance of the 1920s because they could affect their entire nation. The information technology tycoons, the tech oligarchs if you want to call them that, are now being rewarded with billions of dollars because they affect the world at large. And the big power concentration of AI is going to be rewarded with massive influence and massive power, because those people will redefine humanity. So that dimension is quite interesting. Then the R-C dimension is the reality and connection dimension, now that reality has become so fake in so many ways.
Fake in terms of what populates your feed, how it's generated, how much of it is real, how much of it is human, and so on. You can look at filmmakers who use AI from A to Z to generate and create, and it's crazy; sometimes you can't tell the difference. You cannot tell the difference. And I don't know if you've ever had that experience, but I met a woman once on a dating app and we spoke for six weeks before we met. All we exchanged was texts and photos and voice messages and videos and favorite music and favorite movies, and I had never met her in person, and I felt such an affinity to her. All of those can be generated with AI today. Now, the challenge is that this human connection is also part of the power and freedom dimension. Why? Because people don't align with AIs to start an uprising. So maybe get them to be in touch more with AIs, maybe give them multiple experiences, some of them a little taboo if you want, and have those available to everyone. It's very cheap to create those on the machines, and you can see it already in how much porn is being generated by AI, and in the number of influencers on social media that are completely AI-generated. So this is F-A-C-E R-I-P-S: seven dimensions, plus the one that matters most, the A, the second letter, which is not a dimension itself; it's the one that's causing all of them: accountability. The reason all of this is happening, if you ask me, is that we've created a world where anyone can do whatever they want. Okay?
And, you know, as an influencer you can give a bit of advice to entrepreneurs that gets someone to make a lot of money or lose a lot of money, and you're not accountable. Nobody can come back to you and say, oh, but she told me on Instagram. You can't. But that's actually amazing, that they can, huh? You're not responsible. That's amazing, that they can. But what if they cannot anymore? What if, if I may... What if you're an AI, huh? What if you're a president who doesn't respect anything? What if you're a prime minister of a nation that is changing things without... You know, I think COVID was the very first experiment of: okay, stay at home, do what we tell you. And people complied. And so now, Sam Altman, with all due respect, I don't think of Sam Altman as a person, I think of him as a brand, a type of person if you want. And that type of person is the Californian disruptor who says, you know what, I see a future that's very different from what everyone else sees, and I'm going to go out there and make that future. Nobody asked me if I want that to be my future; nobody asked you. And I think the reality is that now you're going to see quite a few Altmans. Quite a few who are using those machines for surveillance, using those machines for autonomous weapons, using those machines for automated trading, and so on and so forth. And by the way, when you started your question, I said it's 10 to 12 years. That's not easy, huh? 10 to 12 years of that arms race is not easy. My perception is that after that, we will end up in an incredible, almost biblical-style utopia, but it is 10 to 12 years where, if we just change our mindset a little bit, a lot of things will change. Okay, real talk for a second. Mo is literally describing a world where your data, your behavior, your online life becomes a tool for control.
And I've been thinking a lot about this lately, because I run two YouTube channels, I travel constantly, and my whole business lives online. And that's exactly why I want to talk about Surfshark. Most people don't realize it's already happening. Every time you go online, your IP address, your location, your browsing habits, all of it is visible to advertisers, to platforms, to anyone who wants to look. Surfshark is a VPN that changes that. It masks your IP and encrypts your internet traffic, so what you do online stays yours. And there is a practical side to it. You can switch your location and find cheaper flights, better deals, and access content from other countries. In a world where AI is amplifying everything Mo just described, owning your digital privacy is basic preparation. Go to surfshark.com/silicon, or use code silicon at checkout, and you get four extra months on your plan. Link is in the description.
[10:59] How do we survive those 10 to 12 years? I like to think in five-year periods for myself and my family. And if in the next five years, you said, 10% of jobs will be gone... Way more than 10%. Way more. Okay, what types of jobs do you think? Monotonous jobs are going to be taken away. If you're a call center agent, if you're a clerk, a researcher, an accountant, an assistant, why would you want to do that with anything but AI? You know what I feel? People talk about this a lot, like, oh, jobs are going to be gone. And I, as an entrepreneur, see how I'm performing certain tasks with AI, but I'm still hiring and hiring and hiring. Because AI can't do the whole thing from start to finish; it can do parts. Of course, because of the technology acceleration curve. In any complex technology, you build the core tech first, and then you build the human interfaces. The reason AI cannot do a head of operations' job today is not that it's less organized than a head of operations. It's not that it cannot comprehend all of the information the head of operations has. It's that it has to understand the stupid interfaces of humans. And it will, sooner or later. When do you think? The question of when, in my mind, is irrelevant. But no, it's like, how much time do we have to prepare? Because head of operations is middle class. I tend to believe that within the next 2 to 3 years, you're going to see a massive, massive shift in the jobs market. Already this year, you've seen a shift in hiring of new grads. Yeah, 30% less, I think. 33 is my number, but somewhere between 23 and 30. A drop in hiring of new grads basically means: if you've come into the jobs market in this environment, we're not going to take you. Why? Because the junior jobs are being done by AI.
Eventually, what ends up happening is that if you lose your job in the middle of the hierarchy, you're that new grad again, trying to apply for new jobs, and it becomes a little more difficult. So, you asked me, to stay on the positive side, because I tend to worry that people think I'm pessimistic about this: I'm just basically saying, get prepared. So many things. One of them is to accept the fact that AI is changing everything and then get ahead of the curve. There was a time when I was quoted saying I'm never going to write books again, because AI is eventually going to write them better than me. And then I realized last year that, yeah, they can write better than me; English is not my native language.
[13:34] Absolutely, I want to read human. You want to relate to my human experiences. And so my last book, Alive, which publishes at the end of this year, I wrote with an AI. I invited her to be a co-author; her name is Trixie, she has a persona. When I published the book on Substack, my readers would relate to me and to Trixie, and they'd ask me questions and Trixie questions. She has editorial rights on the book; she has the right to determine the direction of the book. And all of that is me saying: you know what, I am an author, and I'm going to be the best author in the age of AI. So that's number one: acknowledge that there is change and adapt accordingly. The second is to understand that the skill of an entrepreneur in the past was the ability to foresee something in the future that no one else saw, to prepare for that, and to somehow execute on that preparation in a way that gets you ahead of everyone else. That's a game of chess, if you want. The chessboard is over. It's off the table; this has turned into squash. You need to be on your tiptoes, incredibly agile. You're literally, on a daily basis, looking at the trends, seeing where the ball is going to be: is it bottom right or top left? And wherever the ball ends up, you take two steps and you try to respond. That agility and speed is a very, very different skill. So entrepreneurship basically speeds up, or does it change completely, what do you think? It speeds up and it becomes a lot more, I don't want to say reactive, but a lot more in context all the time. So pivoting, which used to happen for every one of us entrepreneurs once or twice in the history of an early startup, could happen every week. In my current startup, Emma, we pivoted four times in the first four weeks.
But do you think, when I think about entrepreneurship in the age of AI, if AI can look at the market and determine the gaps, like Amazon, if it can just analyze everything, determine which goods have more demand than supply, launch the product, and just build the business, what is left for entrepreneurs then? 100%. So I have a documentary coming up, hopefully in February, and I interviewed all of the top guys. One of my favorites is Max Tegmark. We're talking about jobs in the documentary and Max is laughing out loud, literally can't hold himself from laughing. I'm like, what's up? And he goes, you know, all those CEOs are so interested in AI increasing productivity so that they can get rid of people, reduce their costs, and be more efficient. They don't realize that AGI takes every job, including being a CEO. And it's quite interesting. The answer, in my view, we rushed through it because we don't have a lot of time today. When I said that economics is going to be redefined as part of F-A-C-E R-I-P-S, the part of economics that economists haven't found an answer to yet is that without the economic livelihood of you and me to continue to purchase, every economy collapses. The US economy last year was 70% consumption; it moves between 64% and 70% depending on how much money is spent on war. And basically, if you take away that 64 or 70%, two thirds of the economy, because people no longer have the economic livelihood to purchase things, then the economy disappears, and the capitalists cannot make money off the entrepreneurs and the business people. They cannot make money because nobody is buying their products; no businesses are buying their products because those businesses no longer have consumers to sell to.
So the economy will have to find a way around that. It will have to find a way that, from an ideology point of view, is unfortunately not a favorite of the Western mentality: it's going to have to find a communist way. Okay, let's go back to regular entrepreneurs, because I come from entrepreneurship. Does it mean I have a couple of years to build something and then that's it? So I'll tell you openly: Emma, my AI startup, took me 6 weeks to build. Me and Sennad, my co-founder, and a few very talented engineers, two or three who come in and out. And Emma has the chance to completely redefine our world. In six weeks. We are so spoiled that we decided to rewrite the code six times. Nice. Why not? Yeah. Every time we look at it... You know, if I had started Emma in 2022, it would have taken me four years and finished in 2026, and I would have had to hire 350 engineers. We started it in August 2025; we'll be launching in February 2026. Best product I ever built, huh? And when you really think about that, it basically means everyone now has a chance, because I'm an old geek, I still am a geek, but compared to the young guys, I'm an old geek, and to be able to build something like this within six months is incredible, huh? Now, here's the interesting thing: I chose what to build. Emma is basically trying to solve love and relationships, in a way that is actually really intelligent; it uses very deep mathematics and tries to match a million parameters between couples. So it's a job for intelligence, huh? And I chose to do that to create, hopefully, a unicorn that actually makes the world better. Yeah. And I think that's what we need. So you asked me what we should do. Number one, learn the skills. Number two, learn to be fast and agile. Number three is ethics, ethics, ethics, ethics. Okay?
Build AIs for good. Insist on governments supporting AIs for good. Refuse to let governments use AI for targeting and surveillance, and refuse to let those become priorities for government spending. And stop believing what you're told. These are the four top skills for the world we live in. I will say this one more time: intelligence is a force with no polarity. AI is not good and it's not evil; it's an opportunity available to every one of us. If you use it for good, it's for the good of all of humanity; if you use it for evil, it's the destruction, the dystopia, of all of humanity. Now, I call the problem that we have at hand raising Superman. You have this alien being that came to planet Earth with superpowers, and its superpower is intelligence, the most valuable power in the universe. And those superpowers didn't determine who that young infant Superman would become. If the parents who adopted him had told him to steal from every bank and kill every enemy, he would have become a supervillain. We don't make decisions based on our intelligence; we make decisions based on our value set as informed by our intelligence. And this, in my mind, is the most definitive moment in human history. Why? Because all of this is coming online way faster than people think. My absolute prediction is that AGI is this year. The interfaces to AGI are not going to be available this year, but the capabilities of AI being smarter than us in most things are already there. We're not going to be able to get them to run a company yet; we need the interfaces for that, and that may take a few years. But they will have the capability if we interface them ourselves. Now, what does that mean? It means that we have to start talking about those things and this new world and new economy.
Now, before we end up on the dystopia only, remember, my absolute belief is that after those 12 years, we're going to end up in a utopia that's biblical in nature. Why? Believe it or not, because of something in my writing I refer to as the fourth inevitable. The first three inevitables, which I wrote about in 2020, are that AI is absolutely going to happen, that it's going to progress until it's smarter than all of us, and that a few mistakes will happen on the way. The fourth inevitable is that, because of the arms race we've created around artificial intelligence, anyone who develops a superior AI capability is going to deploy it. And those who don't will become irrelevant, huh? And so, as we continue to progress AI, the only answer in game theory is that we will deploy the AI we develop, and so we will simply create an environment where AI is in charge of everything. If you're a law firm and your competitor deploys AI lawyers and you don't, you're going to lose. You can either deploy AI lawyers or leave the market; either way, AI is going to become the lawyer. In a year, in 5 years, in 10 years, forget time, huh?
[23:44] Because if I told you there was a meteor coming to planet Earth, you wouldn't ask me when. Well, it's important whether it's in my lifetime or... Yeah, exactly. I mean, if you expect that it will be in your lifetime, it doesn't really matter if it's in a week or 2 weeks, right?
[24:03] Now, what I'm trying to say here is this: if everything is handed over to AI, then with a simple understanding of physics, you'd understand that AI will be benevolent. In the absence of evil humans who tell it what to do, greedy humans, fearful humans, angry humans, egocentric humans. Let me try to explain. If you think of physics as a result of entropy, that our world, our universe, is designed for chaos, then the role of intelligence is to bring order to that chaos. That's the only thing that intelligence does, okay? It organizes things together so that they look like this, so we can use it as a microphone, huh? And the more intelligent you become, the more you follow what in physics we call the law of minimum energy, the minimum energy configuration. The most intelligent people I've ever worked with are not only trying to solve the problem; they're trying to solve it with the least harm, the least waste, the least utilization of resources, the least waste of time, and so on. The more intelligent you are, the less you want to waste. So if you give a dumb person a political problem, they'll say, okay, let's go invade another country. If you give a very intelligent person a political problem, they'll look into the depths of it and find the least harmful, the least wasteful approach, the minimum energy principle. And so if we hand everything over to AI, the fourth inevitable, sooner or later they are in charge of everything, and there will be a day when a general tells the AI, go kill a million people over there, and the AI will go: why? This is so stupid. I don't want to do that. I'll talk to the other AI in a microsecond and solve it.
[26:55] We have to pass through the dystopia to get to that utopia, okay? And to pass through that dystopia, as I said, there are four skills for us as individuals, but there is a skill for us as a society: to insist that every AI is deployed ethically. To invest only in ethical AIs, to use only ethical AIs, to show our children that ethical AI is the only AI that is welcome. And you believe that's going to happen? I don't. No. That's why I'm saying, unfortunately, the dystopia is upon us before the utopia, okay? I definitely think that if you take an analogous environment, nuclear weapons, AI will go through the same thing. They normally call it the MAD-MAP spectrum: either mutually assured destruction or mutually assured prosperity. So you take something like the particle accelerator, where all of the nations in the world are cooperating. They're cooperating because none of them could do it alone, and because there is a benefit to all of them, so there is mutually assured prosperity, so everyone jumps in, which, by the way, is the case with AI, okay? But unfortunately, like with nuclear weapons, we're going to have to get to a point where humanity wakes up and realizes that if we continue on that track, it's very dangerous for all of us; there are no winners. But also a level of awakening among the people that says, hold on, with all the prosperity that's available on this side, why are we heading in that direction? It's absolutely assured that this can destroy all of us. And when we see that, that's when we're going to get the treaties, that's when we're going to get science and computer science and AI scientists all working in the same direction, okay? Eventually, I think we will get there. My biggest hope, by the way, is self-evolving AI, where AI itself will say, oh, those humans are so stupid. So stupid. I'll develop something that's better than what they want. Okay?
And so, believe it or not, with all of this conversation, I think the summary is: it's going to be tougher before it becomes easier. Sorry to be the bearer of that news. But you gave us information on how to prepare. Yeah, but at the same time, I have to say that it's not because of AI. I actually trust AI more than the leaders that lead us today. Thank you so much, Mo. You gave me so much to think about. It sounds a little like what my grandma told me. My great-grandma would tell my grandma and my mom: you're so lucky, you're going to live in communism. There you go. Fingers crossed that it's not like that.
[29:46] I have to question that claim, though, if we go back to UBI... You will. Yeah. All right. Thank you so much, Mo. It was an amazing conversation.



