
Live Case MBA Fundace: 7º Sentido Confecções

Fundace - Fundação para Pesquisa e Desenvolvimento da Administração, Contabilidade e Economia

14m 29s · 2,799 words · ~14 min read
Auto-generated transcript

[0:00] Hello, everybody, and welcome to the AI chat show. My name is Peter, and I'm joined by my wonderful co-host as always, the amazing Dr. Sarah, and we are going to talk about AI, because that's what we do. And we are going to talk about it with a lovely guest today. Who is our lovely guest today, Sarah?

We have the wonderful Dr. Kate and I'm very excited for today's episode. And she's also joined us before on the show, but we've got some new things to talk about, haven't we?

Yeah, it's very exciting. So, Kate, for people that don't know you from the previous time, just to give them a little bit of a heads up, who are you and what do you do?

So I'm Kate. I'm a researcher and lecturer in responsible AI. Um, and I just finished my PhD, looking at the implications of AI systems for human rights. I'm interested in how AI affects people and how we can make sure that AI is being used in a good way.

And as Sarah said, you've been on the show before. You've talked about what you do, but something has changed very recently, and that is that you now have a book coming out. And that is why we wanted to get you on the show, because this is very exciting. Tell us about this book, Kate.

So I've got a book called The Ethical Line, and it's coming out on the 10th of October. And it is a book about the ethics and human rights implications of AI. So I've written it, uh, with the aim of kind of unpacking what is a fair and ethical AI system. Because it's quite a hot topic, it's a bit of a buzzword, ethical AI. And it can mean different things to different people. So I wrote it to help people understand a bit more what it means and why it's so important.

So it's coming out October the 10th. Where can we find it?

Uh, so you can pre-order it at the moment on Amazon, Waterstones, all of the usual suspects. Um, and then once it's out, you'll be able to find it in those places and probably your local bookstore, too.

So this is so exciting.
And I just want to say a massive congratulations on writing a book. I can only imagine the amount of work and effort that goes into writing a book. And this is a fantastic achievement, and I'm really looking forward to reading it.

Thank you. It was quite a lot of work, but it was really rewarding to do. It was really nice.

So what kind of topics do you cover, then, in the book? Because it's all about ethical AI, but that's a very broad topic, isn't it?

Yeah, exactly. So I've tried to tackle this in a kind of interesting way. So it's framed around specific ethical dilemmas. So things like if an AI makes a mistake, who is responsible? Or should AI be used in decisions that affect people's lives? And for each of those ethical dilemmas, I look at specific case studies of where this has happened. So I look at how AI has been used in things like criminal justice or employment. And I tried to really drill down into the kind of real world examples of ethical AI issues.

And I'm assuming that the title The Ethical Line refers to where you draw the line. And how do you then answer that question? Do you suggest that everyone should draw a line at the same point, or is that for the reader to make their own decision?

So that's a good question. Um, yeah, the title is very much about where you draw the line. And I wanted to write the book in a way that helps people consider where they would draw the line. Um, I think what's important is that people are having these conversations and thinking about it. Um, I don't necessarily provide definitive answers to all of the big questions. Because often there isn't a single answer, but I'm trying to kind of push people to consider their own perspectives on it.

I think that's a really good way of going about it. Because when we're talking about AI, it's very easy to just say, hey, this is good, this is bad, this is what you should do, this is what you shouldn't do. But a lot of it comes down to a discussion, doesn't it?

Yeah, exactly.
And I think that's why it's so important that we have as many people as possible involved in these conversations. So it's not just technologists, it's not just ethicists, but it's really broad public engagement. Because AI is affecting everybody, we need everybody's voice to be heard.

So as a lecturer, who do you tend to educate? Is it other students in particular, or is it a broader scope?

Yeah, so I primarily work with undergraduate students who are studying computer science or related subjects. So I teach them about the kind of technical side of AI, but also the ethical side of AI. So I want them to develop ethical instincts, so that when they go out into the world and they're building AI systems, they're thinking about those ethical implications. So they kind of inherently consider them in the design phase.

And I'm assuming that it is a very big responsibility to be teaching young people who are just starting out in their careers or about to embark on their careers, to be teaching them the ethical side of AI. Because they are the people who are going to be building the next generation of AI.

Yeah, it is. And it's quite a challenge. Because these are people who love technology. That's why they're studying computer science. And I'm not trying to say don't use technology, don't build technology. But it's trying to help them understand that there's also a kind of societal aspect to it. And that these systems that they're building can have real implications for people. So it's getting them to think about, not just can we do this from a technical perspective, but should we do this from an ethical perspective?

And how do you find they generally respond to that? Is it something that they find quite new, or something that they've generally been thinking about already?

I think it's a bit of both. Some of them are really interested in it, and they've actually thought about it before. They've maybe heard about it, and they've got their own kind of perspectives.
For others, it's a really new thing. Um, and for them it can be a bit challenging at times. Because they're so used to kind of technical problem solving, and this is a bit more fuzzy. It's not as clear cut, often there isn't a right answer. So it's getting them to engage with that fuzziness. I think that's probably the biggest challenge.

Yeah, I can imagine. And I think that's a brilliant way to frame it with your students. And also, the general public with your book and the dilemmas that you're presenting them with. Because it's quite easy for people to say, I'm just going to draw the line here. But when you're faced with specific dilemmas, you know, it's quite easy to start saying, Oh, well, in this situation, I'd probably do this. Oh, but in that situation, maybe it's this. And it makes them really think, doesn't it?

Yeah, exactly. And I think that's why the case study approach is really important. Because it's quite easy to kind of sit from an academic ivory tower and just speak about things in general. But once you start looking at the real world implications, where it's had a really negative impact on people's lives, that's when people really sit up and take notice.

And you just said that it's important that more people are having the conversations. Are we having enough conversations right now, do you think?

No. No, I think we need to have a lot more conversations. And I think that's why I'm really keen that the book is quite accessible. So I'm not a big fan of kind of academic jargon, I'm trying to avoid that. Because it's so important that as many people as possible are involved in this. It's not just a kind of small elite who are talking about it. And I do think it's still fairly elite at the moment.

How do you make sure that it's not elite? How do you make sure that more people are getting involved?

I think it's just about talking about it wherever you can. So I'm lucky that I'm able to go into schools and speak to school children about it.
I'm able to talk on podcasts. And it's really about taking the message to wherever the people are. So it's not waiting for people to come to you, but going out and talking to them.

And I think in the book you also speak about human rights as well. So is that from the perspective of how AI can violate human rights or how it can protect human rights?

Yeah, so it's a bit of both. It's a big focus of the book. Um, and it does look at both. So AI has the potential to really advance human rights. And it's quite a powerful technology, and it could be really good for human rights. But it also has the ability to cause severe harms and have a negative impact on human rights. And often that's a bit more of the focus in the book. Because it's the bit that we need to be really careful about. So I look at how it could potentially violate people's rights to privacy. Or the right to a fair trial, for example, if AI is being used in criminal justice. Or even the right to education. And there are so many different human rights that are touched upon by AI. So it's getting us to think about how we can build human rights protective AI systems.

I think it's quite interesting because a lot of the time when people think of the implications of AI, they think of the big Terminator type scenarios. But actually, as you just said, it's those more granular every day, potentially, you know, AI being used in criminal justice or things like that. That's where the actual damage can be done to human rights.

Yeah, exactly. And I think that's why it's really important that we understand these implications now. Because it's not about the kind of sci-fi futures, it's about what's happening now. And it's about making sure that the future that we're heading towards is a good one. Because if we don't think about it now, it might be too late to act in the future.

And there are already examples of that, for instance, with bias within AI. And that has had an impact, hasn't it?

Yes, absolutely.
So I talk about bias in the book, particularly in the context of things like employment. And there have been a lot of high profile examples of AI systems that have been built to filter job applications. And they've been found to be biased against certain groups. For example, Amazon built a system that was found to be biased against women. And these are real harms that affect people's life chances. And that's why it's so important that we think about these issues, and we take action to mitigate against them.

And how do you then deal with bias within an AI system? Because obviously, AI is trained on data, and if that data is inherently biased, then the AI will be biased.

Yeah, exactly. So I look at this in quite a lot of detail in the book. And I think the first thing is to really try and address the data. So looking at where the data comes from. Looking at whether the data is representative. But often the issue is that it's just a reflection of societal bias. So we're building AI systems based on data from a biased society. So that's part of it, but then it's also about a more holistic approach. So it's about who is on the teams that are building the AI. Are those teams diverse enough? Are they representative enough? And then it's about the kind of testing of the system. And it's not just testing it as a technical system, but testing it for its kind of social impact. So there are different things that can be done at different stages of the design and development.

So it's a bit like a circular process, isn't it? Because the humans are creating the data that then affects the AI, and then the AI affects the humans.

Yeah, exactly. It's a socio-technical system, so it's not just the technical bit. And it's also about, you know, do we trust an AI system to be making decisions about people's lives? And do we need to make sure that there's a human in the loop? So it's not just automating things for the sake of it. Because sometimes we don't need to automate things.
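[Editor's note: the kind of "social impact" testing discussed above, such as checking a hiring model for group bias, can be made concrete with a small sketch. Nothing below is from the book or the episode; the function names and toy data are illustrative assumptions, and the 0.8 threshold is the widely used "four-fifths rule" of thumb from US employment-selection guidelines, not something the guests cite.]

```python
# Illustrative sketch: a demographic-parity check on a CV-screening model's
# decisions. 1 = shortlisted, 0 = rejected. All data here is made up.

def selection_rates(decisions, groups):
    """Fraction of positive (shortlisted) decisions per group."""
    rates = {}
    for g in set(groups):
        picks = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(picks) / len(picks)
    return rates

def disparate_impact(rates):
    """Ratio of the lowest to the highest group selection rate.
    The four-fifths rule of thumb flags values below 0.8."""
    return min(rates.values()) / max(rates.values())

# Toy example: group "a" is shortlisted far more often than group "b".
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

rates = selection_rates(decisions, groups)  # {'a': 0.75, 'b': 0.25}
print(disparate_impact(rates))  # well below 0.8, so this system would be flagged
```

A check like this only covers the technical slice of the "holistic approach" described above; the points about data provenance, team diversity, and a human in the loop are organisational questions that no metric can answer on its own.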
And that's a brilliant way to put it, that we don't need to automate for the sake of it. I think that's a very clear message that needs to be communicated. Because that's where people start to get worried about AI. Because they think that it's going to replace their jobs or replace human contact. And in some instances, it really does need to have a human in the loop. So what would you say to people who are really worried about the impact of AI?

I would say that their concerns are valid. And it is something that we need to think about. But I think what's important is that we don't just kind of bury our heads in the sand. We need to engage with the technology. We need to understand it. And we need to have these conversations so that we can shape the future that we want to be in. Because if we don't engage with it, then other people will shape the future for us. And that's not always the future that we want.

And I think that's where your book and this conversation come into their own. Because a lot of people are either scared of AI or they completely embrace it. But there's very little in between, isn't there? And I think what you're trying to do is encourage people to go, okay, let's just pause. Let's think about this, and let's have a discussion about it.

Yeah, exactly. I think there is a tendency to kind of bifurcate into either utopia or dystopia. And the reality is that we're somewhere in the middle. And we need to have these conversations so that we can ensure that we're heading in the right direction.

So if people want to find out more about your book, how can they find out more, aside from purchasing the book?

Uh, yeah, so I am on social media, so you can find me on X, which used to be Twitter. Uh, and also on LinkedIn, and I will be posting about the book there. Um, and yeah, just kind of check it out and pre-order it. And I will also be doing some events around the book coming out as well. So check out my social media channels for that.
And just to clarify, because people like to know these things, if they go into their local bookstore, assuming they have one, they can ask them to order it in.

Yes, they can. Um, yeah, they can. And they can also pre-order it through Hive. They're really good, they support local bookstores, so if you want to support your local bookstore, do it through Hive. Hive is H.I.V.E. Correct?

Yes. Thank you very much, Kate, for coming on the show and talking about your book. It sounds absolutely fascinating. And I'm really looking forward to reading it.

Thank you for having me.

Thank you for joining us today, Kate, and also on behalf of myself and Peter, I just want to say a massive congratulations again on your book. And I'm really looking forward to getting my copy and reading it. And to everyone out there, thank you for listening to the AI chat show. You can find all of our episodes on your favorite podcast platform. And also, do give us a follow and rating because we love to hear from you. And we will see you next week. Bye.
