[2:43]Welcome.
[2:46]I'm Justin O'Brien from the Friedrich Ebert Stiftung's Competence Center on the Future of Work. We're based in Brussels, and we're thrilled to host this panel this morning on the future of work in the age of generative AI: a social democratic agenda. And when I say agenda, I mean with a question mark at the end, because that is exactly what we're going to discuss here. It's not so clear what the agenda is, and I think this is rather urgent. I know it's early in the day, and it's a long day, but it's not early when it comes to discussing AI and the world of work. We already see a whole range of effects. Last year there was an accumulated 500 billion euros of investment in AI, and the projections run into the trillions globally by 2030. In our work with unions we already see the first effects in the world of work happening as we speak. We do a lot of work with unions in the creative sectors, for instance, and many workers are outraged, because their data is being taken from them: their work is being fed into these systems without their consent and without any real compensation, and they now find themselves competing with these machines. The projections are, and this is of course early evidence, we're early in the AI age, let's say, that many sectors and many white-collar jobs are vulnerable to automation, to deskilling, to downward pressures on working conditions. This ranges from translation to visual design, to gaming, to administrative jobs. So quite a broad range of jobs is coming into the scope of the latest AI systems. I don't want to buy too much into the hype around superintelligence, or wholesale replacement, mass layoffs and a future of work without work. But I do think we should take this seriously, because, as I said, we see the effects.
And there are lots of prognostications, lots of studies that predict a widening gap between capital and labor. And when I say widening, I mean a further widening, because we have already seen this over the past decades with digitalization more broadly. So I think this is an absolutely crucial issue, and one I really look forward to discussing with the panelists today. I also think the sector is extremely concentrated: a few dominant AI players, mainly based in the US. That also plays into the inequality and productivity discussion, because if all the economic growth comes from firms that are not based here, how will it create wealth here? How will it create quality jobs here? That's another angle we cannot ignore. The policy response so far has been limited. In the US, the AI industry has produced its own industrial policy papers, and they talk about a universal basic income, startup subsidies, everybody becoming an entrepreneur. I don't think that's necessarily the answer. And in the EU, the main response has been the Artificial Intelligence Act, the law that tries to set a few conditions around AI. At the same time, that law is already being rolled back as we speak; there are discussions in Brussels on that. So against that background, I'm really interested to discuss with our panelists what Social Democrats should be doing about this. What should we do about AI and labor? Lots of workers are concerned. Where are we in this debate? What are we going to do? And you don't have to worry, you don't have to listen to me for the next 40 minutes, because we have a great set of panelists. First, we're joined by Emma Lopez, the Deputy Spokesperson of the PSOE, if I pronounce that correctly, the Spanish Socialist Workers' Party.
[6:47]She has lots of expertise on economic issues and innovation, a long track record, and a deep interest in the digital transformation. So really happy you could join us for this panel, Emma. Second, we have with us Brando Benifei. He accepted to join last minute, so special thanks for joining us on short notice. He is the expert on the Artificial Intelligence Act, having negotiated it over the past years in the European Parliament. He is also the chair of the European Parliament's delegation for relations with the US, which is pretty relevant in this discussion, and he is of course active in the Trade Committee, the INTA Committee, in the European Parliament. So an extremely relevant portfolio. Then, last but not least, we have with us Christian Kellermann. He's a senior researcher at the German Research Center for AI, which I understand is the largest nonprofit organization developing AI, something I didn't know. So, extremely relevant. He has a long track record on the technical side of AI, but also does a lot of policy work. So really thrilled, Christian, that you could make it as well for this panel, again also relatively last minute, given the Lufthansa strike, which we support. Maybe let's start with you, Christian, because you have very deep technical expertise on this topic. I sketched already some of the early impacts that we see from AI. Can you complete the picture a little? When we talk about generative AI, when we talk about AI, what broadly are the effects for workers, for labor markets? Yeah, first of all, thanks a lot for the invitation. It's great to be here. I think it's a very important moment to discuss this topic on a global scale. What are the effects? For at least ten years now we have had all kinds of studies, a huge flood of studies, especially from the US, on the labor market effects of AI.
And recently, with generative AI, I think there are two directions. One direction was articulated recently by David Autor; if you don't know him, he's a researcher from MIT, they actually all come from MIT in the US. He said AI might lead to a rebirth of the middle class. Why? Because lower-skilled jobs can be augmented by generative AI, so workers can execute tasks that previously required higher qualifications. There will be a rebirth of the middle class by uplifting these lower-skilled jobs. And actually I was in New York two weeks ago, standing at a bus station, and there was a commercial for ChatGPT, a moving image. It showed an older man who was pretty desperate about his tractor, which had broken down, and a younger guy who came to help him repair it by consulting ChatGPT. That is pretty much Autor's idea of the rebirth of the middle class. Then, on the other hand, Autor's colleague Daron Acemoglu, you have probably heard of him, recently said that productivity effects will be pretty low in total, on a macroeconomic scale, so workers won't really profit when it comes to income. What Autor describes as a rebirth of the middle class, Acemoglu describes as a cannibalization of lower-skilled work: there will be more people who can supply that type of work. In a way these two views are contrasting, and yet very recently the two of them brought out a paper together on how to build pro-worker AI. But this shows pretty well the debate that is going on right now in the field.
At the end of the day, it comes down to microeconomic observations, observations at the company level, being extrapolated to the macroeconomic level. But that just doesn't fly, it doesn't do the trick, and we are left with very shaky data on what this change in the world of work will look like. So first of all, it's still necessary to look at the company level, but second, I think both of the quoted authors have a really important point. Of course, there is this chance to upscale and upgrade work, and I think this must be one major focus point of progressive actors. On the other hand, what we observe right now, and I spend a lot of time in German companies doing educational work, is a huge level of insecurity among workers about everything: the digital divide, deskilling, substitution of jobs, and of course unemployment, or becoming unemployed. It's all there. So it's a huge field for progressive actors, for social democratic actors, for trade unions, to fill this gap by creating the relevant policies. And for us as researchers, it's also very important to get much better data that makes the journey from micro to macro in a more sustainable and more relevant way.
[12:42]Thanks a lot, Christian. So you sketched the two sides of AI: that there is maybe a high road we can take, where we create quality jobs and good work. On the other hand, that's not really what we see right now, at least that's my estimate. So that brings us to the question for the two political representatives on this panel. Why are we not tackling this? What should we be doing to take that high road? How do we get to those quality jobs? And why is this not a bigger part of debates within social democracy? I don't know who wants to start. Brando first, please, and then Emma. So thank you very much for this opportunity, and thanks to the Friedrich Ebert Stiftung's Future of Work program for a long-standing effort to delve into this topic.
[13:40]And your work has been very useful, also for my own work negotiating the AI Act and fighting together with a great comrade who is not here now, but who I know will be here in these days, Nicolas Schmit, former Commissioner for social policies and now the President of the FFF, who has been advocating precisely for a directive on AI at work. I want to underline that we had a fight, which I think we broadly won, on having progressive legislation on AI through the AI Act. We were a bit lucky, to be frank. There were good coincidences, and I don't say that to flatter our hosts, but it was very good that we had to negotiate the law with the Spanish government representing the Council at the time: the European Parliament's counterpart for finalizing the legislation was the Sánchez government, and it was just a coincidence. Sometimes in politics you need to have the good conditions at the right time. So in the end, the AI Act already has some important safeguards for AI at work. For example, and this was a big fight we fought together with the unions, we designated the workplace as a high-risk area for the use of AI. That means AI systems that enter the workplace and interact with workers, for example in hiring, will need to be checked, under the AI Act rules, which in this case are not yet applied but soon will be, on certain important characteristics: for example the data used for training, to avoid discrimination and risks for workers. We also put in clear requirements to be transparent about the use of AI in the workplace: the use of AI systems in the workplace must be declared, it must be made transparent. And we clarified that any further protection at national level is not to be considered a violation of the AI Act.
The AI Act is a floor, not a roof: we can have further protections on top of it. This is very, very important, and to be honest, we had to fight a lot with some business organizations that wanted to say: that's it, no further protections, either through union negotiations or through national legislation. But, as I said at the beginning of my intervention, we tried to go further with a directive on AI at work. Unfortunately, this is very difficult to advance now, because the political situation is not good for it; in fact, this initiative is not going further at the moment. Why did we need it? Because the directive would go into areas that, for legal reasons, the AI Act couldn't tackle: for example, mandatory security and training requirements, involvement of social partners at a higher level, and a further expansion of rights for workers along the lines of, many of you know it for sure, the platform workers directive, which has been an important initiative but is limited to platform workers. So algorithmic management has been tackled, but only for a part of the workforce, not all of it. To conclude my intervention: we have made some important endeavors. Some are lacking and are difficult to advance at European level, so my answer is that we need to do more at national level in this moment. We can use European money, European budgets, to advance projects in the field, if I take the European perspective, but we need to do more with national legislation, because the European level, in my opinion, is at this moment unable to make further advancements on the legislation. And we need to be very clear, linking to the previous intervention: certain decisions are not a European competence.
When we deal with taxation, labor legislation, welfare decisions, how we redistribute the added value generated by AI, these are issues where EU legislation, unfortunately, has a lot of limits. I am a European federalist, I would like Europe to have more federated powers on this, but that is not the case. So we need to do more at the level of the member states, we need to coordinate as socialists and progressives on these topics, and we need to bring this debate also to the global level, because, and with this I conclude, the taxation issue, the fiscal issue, is extremely relevant and obviously has transnational implications as well. Thank you.
[18:55]Thank you very much, Brando, for that riveting talk and update on the AI Act. One quick question on that: do you consider the AI Act's protections for workers safe, or are they being negotiated down? Yeah, that's a good point, because we had the Omnibus revision. In fact, I can say, because I'm also rapporteur for that, that these protections are not being touched. They are protected, and I'm convinced there will be no problem in that specific area. However, I have to say, as was mentioned earlier, that we need to protect the transparency provisions regarding generative AI, because generative AI can also be used in a malicious way that can damage workers, and I think it's very important that we build on transparency and also keep our prohibitions in place where needed. For example, I want to underline that thanks to the AI Act we have prohibited, and this is already in place, already applied, emotion recognition in workplaces, which I think is not in the interest of workers and which we don't want in Europe. All right, very clear. Then I would like to move on to Emma. Emma, Brando already mentioned the good graces we had at European level, with Pedro Sánchez in place when the negotiations on the AI Act took place. Of course, Spain is one of the few big progressive, social democratic countries in Europe. How do you look at these topics, and what do you think we should be doing? Or are we perhaps already doing it, and I'm too negative? Thank you so much, Justin. Thank you so much to the FES for organizing this great event and for counting on me. Thank you so much, everyone, for coming, and bienvenidos a Barcelona, welcome to Barcelona, to this GPM event. Well, from my perspective, from the progressive agenda, we have to tackle many things.
But the first, probably, is not falling into the false dilemma between regulation and advancing technology. In Spain we have a clear vision on that, and probably the rest of the progressive world should tackle that point too. There is no contradiction between protecting our people, protecting workers, and at the same time investing in our own technology. That is something we should keep in mind, because we will probably find many people with interests saying: no, no, no, all this regulation you are doing is just stopping us from being like the big players. Not at all. It is about protecting our workers, protecting civil rights. If we do not live in the Wild West in the real world, why should we accept a Wild West in the digital world? That's one of the things we should have in mind when we talk about regulation, and Brando has a great vision of it, as he worked on the AI Act. But from a progressive perspective we should also keep in mind: what do we do about redistribution? We are more or less aware that this is going to be big; it is happening right now. There are numbers from the OECD saying that if we implement it now, by 2035 our GDP will increase by 8%. That's a lot. So the progressive question is: who is going to profit from that 8% of GDP? That's the real question. Because we are living in a world full of inequalities, and AI can become yet another source of inequality. So those are probably the two main questions: how to protect people, and who is going to make the money out of this technology? Otherwise there are risks. The European Union has focused quite well on that, but those two questions are the ones we are perhaps not answering right now, and it is super important that we face them.
There are other debates, regarding, for example, intellectual property rights, something we need to tackle. There are more dilemmas around education, and around all the biases, including the few women currently in this technology, not only working in it; all the biases we have. That is also an inequality problem we need to tackle from our progressive perspective. And I just want to give a quick hint before I finish: a great opportunity that AI opens up is bringing the administration and the citizens together. There is a huge gap: the administration is too distant, too difficult; we probably don't speak the language of our citizens, and that is something the far right uses against us. With this AI technology, when you are able to talk in your natural language to your own administration, in order to pay taxes, in order to get licenses, all these difficult things are going to get much easier, if we are able to use it for our own good. So, a couple of questions and one good use we could put it to. Thank you again. Thank you so much.
[24:49]Thanks so much, Emma, you mentioned a lot. I heard a big focus on inequality, this idea that the super profits of AI are concentrated among a few firms, which I think is indeed empirically happening, so that is definitely a big issue. I heard you talk about tax, and I also heard Brando saying that taxation at European level is a bit of a nightmare; I can confirm that, I worked at the Commission on that and didn't get anywhere. But those are really important questions, also on intellectual property, and I would like to bring that back to Christian. How do you look at those questions? In the European Parliament in 2017, I remember, there was this debate: yes, we should tax the robots, we should tax AI. That's one way to rebalance increasing inequalities. In the US there is big talk of wealth funds that would take those AI profits and redistribute them to workers and affected sectors. So I just want to hear, from your perspective: can any of these proposals be implemented? What would be the priority? Well, I'm probably not here to speak about implementation, but more about the validity of these approaches, or how relevant they are to tackling the potential problems and risks of AI in the world of work. I would say, as you mentioned, Emma, protection is a big issue. But how do you protect? How do you protect work? This is an old topic. You protect the work that exists, but maybe you should also protect new work so it can arise, so to say. Where is the fine line, so that you don't cut off innovation or the dynamic processes that companies need in order to renew themselves?
It's not easy to define, but yes, AI at the company level needs protection that goes beyond Sunday speeches about skills. We need to upskill, we need educational programs, and so on, but there needs to be a lot more, I guess. And this goes back again to the macroeconomics I was already referring to, because most of the scenarios we can observe, three out of four to be honest, show labor losing and capital winning. So how can you change this proportion? How can you make labor win too? How can you make pro-worker AI? The core approach, and this is something of a consensus in research, is that you have to create new jobs, new tasks. When you overprotect old jobs, you probably won't get these new tasks. Of course, it is very difficult for progressive actors to define an approach for creating these new tasks. It's always hard to see what is coming up in the future, and we have a lot of jobs nowadays that we didn't know of after the Second World War. Therefore we need some kind of openness, and this is probably also part of the reason why we're sitting here discussing what the social democratic or progressive answer to this dynamic is.
[28:30]The other thing is distribution policies. Where do you start? Do you start by reducing taxation on labor and raising taxation on capital? It's always very tricky: on labor, you could obviously reduce taxes at the national level, it's a national task. But how do you compensate? We know that capital is more flexible than labor in changing locations, for example. So that's another tricky question. Still, in this calculation, and it's actually pretty much a Marxist calculation, of constant and variable capital, so capital and labor, when it comes to value creation, automation will automatically lead to a reduction of the labor share.
[29:16]And this lies behind this whole topic of the new polarization caused by AI when it comes to the functional income distribution, meaning the distribution between capital and labor, which looks pretty grim for labor. Therefore, yes, there should be some form of taxation, but it's up to you, maybe, to develop that. Maybe that's enough for a first round. Yeah, thanks a lot, Christian, for fleshing that out. That actually brings me to my final question to Brando and Emma, and then I would like to open the floor for some questions. Christian sketched that things look pretty grim for labor. Brando, you sketched that it's very difficult to change that: taxation at EU level, for instance, is very difficult, and if you do it at national level, companies will just relocate to another country; this is all quite obvious. And then we talk about the law, which is very important, and I agree with that. At the same time, it doesn't feel like it matches the level of workers' anger, concerns and anxieties. For instance, if you look at the US, Bernie Sanders and AOC talk about something different: a moratorium on data centers until things improve. They have concerns about working families, about surveillance in the workplace, energy concerns, land use concerns, and they say no, we want a moratorium. I'm not saying I agree, but I would be very interested to hear what you think about policies that go in that direction, because that's not something I really see in Europe at the moment. How would you respond to that? And to the audience: if you have any questions, please think about them, and I will come back to you in a few minutes. Thank you. Well, thank you.
[31:16]Thank you so much. A question from a lady in the front here.
And if you could be very brief, because I was told by the organizers that they will ruthlessly cut us off in five minutes. Okay. My name is Mahab Abdel Nasser, I'm a member of the parliament in Egypt, and actually vice chair of the Egyptian Social Democratic Party. Very briefly: I don't think it's just entry-level jobs. We are now talking about lawyers, about accountants and other middle-class jobs, and this is a big problem. The second thing, about the act: we are now discussing an AI act in the parliament in Egypt, but we don't have any leverage on the companies, so any act will not be implemented, because we can't do anything to the big players who build the generative AI models globally. Thank you. For the panelists, let's go through the questions. There is, I think, a bundle of questions about young workers, how they are affected by AI, and what to do about that; one very specific question on social permitting for data centers; and two questions about the global dimension of this. How can countries outside, let's say, the imperial core of, for instance, the US, influence these technologies? How can they ensure that they don't downgrade working conditions, et cetera? I would start with Christian, and we go one by one. If you can, also include your final statement, if you have one, and then we'll wrap up. So Christian, please. Okay, four minutes to go. Entry-level jobs: I think we have a big clash here. On the one hand, every company working with an LLM needs critical assessment competencies: how do you critically assess a result you get from your generative AI?
You need to build up these competencies from scratch; otherwise you will never develop them. So this pyramidal logic of competencies clashes with entry-level jobs apparently no longer being so easy to get. It doesn't really add up, and I would also really like to question the empirical findings we have right now on entry-level jobs, because there was one big study from Stanford, and it was only one big study. We don't really have many studies on how far this can really be attributed to AI rather than to economic circumstances.
[34:52]I think we should dig deeper into that. But yes, there is a risk, and there is a risk at all job levels. At the same time, I would argue that this risk is also a big chance: if you integrate AI skills early on in vocational education, for example, which is not yet done in a sufficient way, that would make a real difference, because these workers are really important.
[35:48]So I don't have time for a statement, I guess. That's okay, we'll continue the debate; this is the first panel, 45 minutes, and we have two days. Emma, can you follow up, perhaps also specifically on the data center permitting question, which I think is a political question too? Yes, in Spain we have debated this a lot. For us, it's super important that data centers come with AI factories, with supercomputers, with quantum technology, so that we use our people, our talent, our knowledge, and not only our energy and our energy prices. That's probably the key question regarding data property. I think it's super important, one of the debates we need to face, because once we start with data property, we can then move on to taxation, and that taxation is something we are not facing right now; it is probably the future of taxation. And third, super quick: education for the young people, designed together with the market, so that we can close the gap for those who are not yet in the market, whom trade unions are not supposed to defend but whom we must take into account. Thank you.
[37:21]Thank you so much, Emma.
[37:26]Brando, please.
[37:37]Yes, maybe also the global questions.
[37:44]All right, thank you.
[37:52]Well, I think we will continue this discussion over these days, because in fact we are here to discuss global cooperation, and I think this is one topic where we need democratic powers and countries, different actors, to work together.
[38:09]I was myself at the India AI Summit, the global AI Summit there, where the voice of the global majority, we should not call it the global south but the global majority, which it is, was very strong. We have different actors, both in the most industrialized countries and in the developing ones, that can build alliances to ring-fence some basic principles around the use of resources, so the environmental impacts, but also around the way we deal with democratic risks. I'll give you one example: the mass spreading of deepfakes without any transparency rules. There were processes ongoing at UN level and at G7 level, three different levels, we can say, but they are struggling because of resistance from some important powers, first of all the US in this case. So I think it's important that we build alliances around the objective of increasing democratic control over these technologies. Finally, I want to say something on data, which was mentioned as an important topic. I think control over data is one very important element of the struggle we are talking about. So we should refrain from accepting, and this debate is happening now in Europe, any backtracking; Justin was there when we recently had a discussion in Parliament about this topic, about the data Omnibus. We are talking about reforming our data legislation, which in my view needs to be clearer and easier to navigate, but we should not accept deregulation of the protection of data. Instead, we need more democratization of data, because on this topic Europe has been able to create a precedent that is important also for the rest of the world. I think of many friends and comrades in South America and Southeast Asia with whom we have been discussing this.
So Europe should not go back but should maintain a clear direction on this topic. Thanks so much, Brando. Thanks so much to the panelists. Let's give them a big round of applause.



