[0:00] In February 2026, Block fired nearly half of its workforce in a single day. 4,000 people. Block's CEO, Jack Dorsey, said something interesting. He said within the next year, the majority of companies will reach the same conclusion. And he's probably right. Last year alone, over 100,000 tech workers were laid off. AI was cited as a primary driver in more than half the cases. Now, here's the part nobody talks about. Those laid-off workers, they were also customers. They were the people buying software, subscribing to services, paying for products. So if every company fires its workers and replaces them with AI, who's left to buy the stuff these companies are selling? A new paper from researchers at Penn and Boston University asked exactly that question, and their answer is uncomfortable. Every CEO can see this cliff coming, every single one. And yet, none of them can stop. Let's check out why.
[1:00] Let's start with something really simple. Imagine a company. It has workers, it pays them salaries, and those workers take their paycheck and spend some of it. They buy groceries, subscriptions, gadgets, services. Now, here's the key insight. Some of that spending comes back to the company itself, or to companies just like it in the same industry. So workers aren't just a cost on the balance sheet. They are also customers on the revenue line. Now, this sounds obvious when you say it out loud, but most companies don't think this way. They look at workers as expenses, salaries to minimize, headcount to trim. They don't look at their employees and think, hey, that's my demand. Now AI shows up. Agentic AI, coding agents, customer support bots. And suddenly the CFO is looking at the payroll spreadsheet thinking, I can cut half of this, replace them with AI, save millions. From that one company's view, that is a great trade. But zoom out. What happens when every company does the same thing at the same time? And this is where it gets interesting. Say there are 10 companies competing in the same market. Company A decides to fire a thousand workers and replace them with AI. Company A gets all the savings, all of it. Every dollar of salary it cuts flows straight to its bottom line. But those thousand laid-off workers were spreading their spending across the whole market. A little at company A, a little at company B, a little at company C, and so on. So when they stop spending, the damage doesn't land on company A alone. It spreads across all 10 companies. Company A only feels one tenth of the damage it just caused. The other nine tenths get pushed onto its rivals. So from company A's desk, the math looks like this: 100% of the savings come to me, and only 10% of the damage lands on me. That's a no-brainer, fire them. But here's the catch. Company B is doing the exact same math. So is C, so is D. Everyone fires, everyone loses, and all that damage, spread across all those companies, adds up to a collapse in customer spending nobody planned for. Every company optimized locally, but the market as a whole drove itself off a cliff. If that pattern sounds familiar, it should. We have seen it before. This is the same shape as problems we solve in systems all the time. Think about overfishing. Every fisherman wants to catch more fish today. That's rational. But when everyone does it, the lake gets fished out and everyone loses tomorrow. Or, closer to home, think about a database that multiple microservices are hitting. Each service on its own benefits from running more queries to improve its response time. But if every service does that, the database falls over and nobody gets anything. It's the same structure: a shared resource getting drained by individually rational decisions. Engineers have a name for this. It's called the tragedy of the commons. And the thing about tragedy-of-the-commons problems is, they don't fix themselves. The authors of this paper actually prove this mathematically. They show that firing workers becomes what they call a dominant strategy. Meaning, no matter what every other company does, firing is always the best move for you. Even if every other CEO in the industry agreed to hold back, you'd still personally benefit from defecting. And that's the trap. Economists have a classic name for it. You might have heard of the prisoner's dilemma. The paper is basically proving that AI layoffs are a prisoner's dilemma.
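If you want to see that asymmetry in code, here's a rough Python sketch. Every number in it, the even split of spending across firms, and the spend-share figure are made up for illustration. This is not the paper's actual model, just the shape of the CFO's incentive.

```python
# Toy model of the layoff externality described above.
# Every number here is invented for illustration; the paper's model is richer.

N_FIRMS = 10           # firms sharing the same pool of customers
WORKERS_CUT = 1_000    # workers one firm replaces with AI
SALARY = 80_000        # average salary per worker
SPEND_SHARE = 0.6      # assumed fraction of income spent back into this market

savings = WORKERS_CUT * SALARY                      # 100% captured by the firing firm
lost_demand = WORKERS_CUT * SALARY * SPEND_SHARE    # spending that disappears market-wide

own_damage = lost_demand / N_FIRMS                  # the 1/N share the firing firm feels
rivals_damage = lost_demand - own_damage            # the nine tenths pushed onto everyone else
private_gain = savings - own_damage                 # what the CFO's spreadsheet shows

print(f"savings captured:        {savings:>12,.0f}")
print(f"total demand destroyed:  {lost_demand:>12,.0f}")
print(f"damage felt by the firm: {own_damage:>12,.0f}")
print(f"damage pushed to rivals: {rivals_damage:>12,.0f}")
print(f"private gain on paper:   {private_gain:>12,.0f}")
```

The private gain looks enormous precisely because nine tenths of the damage never shows up on company A's books. That missing nine tenths is the whole problem.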
Imagine just two companies. Each one can either automate or not automate. If neither automates, both do fine. Workers keep spending, demand stays healthy, profits are steady. If both automate, both end up worse off than if they'd both held back. Demand collapses, profits drop. But here is the brutal part. If one automates and the other doesn't, the one that automates wins big. It gets the cost savings without bearing the full demand hit. So every company thinks, I'd better automate first. Otherwise, I am the sucker who is still paying salaries while my competitor cuts costs. So both sides think this, both sides automate, and both sides end up worse off.
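Here's that same game as a tiny payoff table in Python. The profit numbers are invented, and only their ordering matters, but they reproduce the logic: automating is the best response no matter what the rival does, and yet both automating leaves everyone poorer than mutual restraint.

```python
# Illustrative 2x2 payoff matrix for the automation game described above.
# Payoffs are invented; only their ordering matters.

# payoffs[(my_move, their_move)] = my profit
payoffs = {
    ("hold", "hold"):         100,  # both keep workers: demand stays healthy
    ("automate", "hold"):     140,  # I cut costs while the rival's workers still spend
    ("hold", "automate"):      60,  # I pay salaries while demand shrinks
    ("automate", "automate"):  80,  # both cut costs, demand collapses
}

def best_response(their_move: str) -> str:
    """What should I do, given what the other firm does?"""
    return max(["hold", "automate"], key=lambda mine: payoffs[(mine, their_move)])

for their_move in ("hold", "automate"):
    print(f"if the rival plays {their_move!r}, my best move is {best_response(their_move)!r}")
# -> 'automate' in both cases: a dominant strategy.
# Yet both playing 'automate' (80 each) is worse than both holding (100 each).
```

That's the textbook signature of the trap: no single player can do better by unilaterally holding back, even though everyone holding back beats everyone defecting.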
[5:18] And this isn't something a Slack channel can fix. Even if every CEO got in a room and promised to hold back, each one still has a private incentive to cheat the moment they leave the room. That's not a coordination problem. That's an incentives problem. Which is why the paper's most surprising result is about which solutions actually work. The authors walk through six proposed solutions. Five of them don't work. Let me quickly tell you why. The first one is universal basic income: give every citizen a monthly check from the government. This is the most popular idea in the AI discourse, and it does help workers. It puts a floor under how bad things can get. But it doesn't change the company's math. When a CFO looks at firing a worker, the savings are the same whether UBI exists or not. The damage to demand is also the same. So UBI is a band-aid on the symptom, not a fix for the incentive. The second one is taxing profits and giving the money to workers. Sounds fair, right? But same problem. If you tax every company's profit by 30%, you just shrink everyone's profit by 30%. The relative math of firing versus not firing is unchanged. The third one is worker equity: give employees shares of the company, so they share in the AI gains. This helps a little, because workers spend their dividends back into the market. But it only helps your own workers. It doesn't address the damage your layoffs do to every other company's customer base. The fourth one is companies negotiating with each other: form a pact, agree to hold back on AI adoption. We have already covered this. It's the prisoner's dilemma. No voluntary agreement is stable. The fifth one is retraining: help laid-off workers quickly move into new jobs. And this one actually does help. If workers get re-employed fast enough at similar wages, they keep spending and the damage shrinks. But it has to be fast and complete, which, historically, it never quite is. So what's the one thing that does work? There is exactly one solution that targets the actual math. It's called a Pigovian tax, named after Arthur Pigou, a British economist from the 1920s. The idea is simple. If your action causes damage to other people, you pay the cost of that damage as a tax. Applied to AI, it works like this. Every time you replace a worker with AI, you pay a tax equal to the demand damage you're causing to the rest of the economy. Suddenly the CFO's math changes. 100% of the savings still come to you, but now 100% of the damage shows up as a tax bill. No more pushing nine tenths of it onto rivals. When the cost is fully internalized, companies automate at the level that's actually good for everyone. Not too little, not too much. And here's the elegant part. The tax money can be used to fund retraining programs, which helps laid-off workers get new jobs, which means demand recovers, which means the tax gets smaller over time. The tax is designed to eventually put itself out of business. Now, I'm not here to argue for or against this policy. That's a political question. What the paper shows is a mathematical one: of all the proposed solutions, this is the only one that operates on the margin where the problem actually lives.
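To see how the tax flips the decision, here's a minimal sketch that reuses the toy numbers from earlier. Again, the linear damage model and every figure in it are assumptions for illustration, not the paper's calibration.

```python
# A minimal sketch of how a Pigovian tax changes the firing decision.
# Numbers and the linear "damage" model are invented for illustration.

N_FIRMS = 10
SPEND_SHARE = 0.6   # assumed fraction of a lost salary that was being spent back into the market

def should_fire(savings: float, salary: float, pigovian_tax: bool) -> bool:
    """Is replacing one worker with AI profitable for the firm?"""
    demand_damage = salary * SPEND_SHARE        # spending that vanishes market-wide
    if pigovian_tax:
        cost_to_me = demand_damage              # the full damage shows up as a tax bill
    else:
        cost_to_me = demand_damage / N_FIRMS    # only my 1/N share of the damage
    return savings - cost_to_me > 0

# A worker whose AI replacement saves less than the demand it destroys:
salary = 80_000
savings = 30_000   # suppose the AI substitute only saves this much, net of its own cost

print("without tax:", should_fire(savings, salary, pigovian_tax=False))  # True  -> fire
print("with tax:   ", should_fire(savings, salary, pigovian_tax=True))   # False -> keep
```

Notice the tax doesn't ban automation. If replacing the worker saves more than the demand it destroys, the firing still goes through. It only blocks the cases where the firm was profiting by exporting most of the damage to everyone else.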
And there is one more twist the paper throws in that will mess with your head. You'd probably assume that as AI gets better, things improve: more productivity, more output, everyone wins. The paper shows the opposite. Better AI actually makes the trap worse. Here is why. When AI gets more capable, the savings from replacing a worker go up, which means the incentive to fire goes up, which means more firings happen faster. But the demand damage doesn't shrink. If anything, it gets bigger, because more workers are losing income. The authors call this the Red Queen effect, from Through the Looking-Glass, where the Red Queen tells Alice you have to run as fast as you can just to stay in the same place. Every company is running faster, investing more in AI, firing more workers, and collectively they are not getting anywhere good. And this is probably the most counterintuitive part of the paper: the smarter AI gets, the worse the trap becomes. The whole argument hinges on whether laid-off workers get reabsorbed into good jobs fast enough. Historically, after every previous automation wave, they eventually did. Farm workers displaced by tractors became factory workers, factory workers became office workers. And if that pattern holds for AI, this trap might never fully materialize. Workers might just shift into new roles we haven't imagined yet, and the demand keeps flowing. The worry is that AI is different. It's moving faster than previous waves, and it's hitting more job categories at once, especially entry-level roles. So the real question isn't whether the math in the paper is correct. The math is correct. The question is whether income replacement keeps up with displacement. That's an empirical question, and none of us knows the answer yet. And if this made you think, do hit subscribe. I cover these kinds of deep structural questions in tech every week. I'll see you in the next one.



