  • Kerry Mackereth

Jess Smith on Slowing Down and Teaching AI Ethics

In this episode, we chat to Jess Smith, a PhD student in Information Science at the University of Colorado and co-host of the Radical AI podcast, who specialises in the intersections of artificial intelligence, machine learning and ethics. We discuss the tensions between Silicon Valley's 'move fast and break stuff' mantra and the slower pace of ethics work. She also tells us how we can be mindful users of technology, and how we can develop computer science programs that foster a new generation of ethically-minded technologists.


Bio:


Jessie J. Smith (Jess, she/her) is a Research Intern for the FATE (Fairness, Accountability, Transparency, and Ethics in AI) team at Microsoft Research, and a PhD student at The University of Colorado Boulder researching machine learning and AI ethics with an emphasis on algorithmic fairness, transparency, and education. Her PhD research currently focuses on multistakeholder recommender systems, where ethics constraints like fairness and transparency are often in tension between stakeholders. Jess' work also focuses on double-sided education: teaching computer scientists about ethics, and educating users to improve their algorithmic literacy. More information can be found on her website.

Reading List:


1. Fairness and Abstraction in Sociotechnical Systems

2. Roles for Computing in Social Change

3. “Algorithms ruin everything”: #RIPTwitter, Folk Theories, and Resistance to Algorithmic Change in Social Media

4. Fairness and Transparency in Recommendation: The Users' Perspective

5. Do Artefacts Have Politics?

6. Design Justice

7. Algorithms of Oppression

8. Atlas of AI

9. Race After Technology

10. Reclaiming Conversation


Transcript:


KERRY MACKERETH:

Hi! We're Eleanor and Kerry. We're the hosts of The Good Robot podcast, and join us as we ask the experts: what is good technology? Is it even possible? And what does feminism have to bring to this conversation? If you wanna learn more about today's topic, head over to our website, where we've got a full transcript of the episode and a specially curated reading list with work by, or picked by, our experts. But until then, sit back, relax, and enjoy the episode.


ELEANOR DRAGE:

Today we're talking to Jess Smith, a PhD student in Information Science at the University of Colorado and co-host of the Radical AI podcast, who specialises in the intersections of artificial intelligence, machine learning and ethics. We discuss the tensions between Silicon Valley's 'move fast and break stuff' mantra and the slower pace of ethics work. She also tells us how we can be mindful users of technology, and how we can develop computer science programs that foster a new generation of ethically-minded technologists. I hope you enjoy the show.


KERRY MACKERETH:

Thank you so much for being here. Just to kick us off, could you tell us a bit about who you are, what you do, and what brings you to the topic of technology, ethics and education?


JESS SMITH:

I'm Jess, you can call me Jess, or Jessie. And I'm currently a PhD student at the University of Colorado in Boulder, USA. And I am getting my PhD in Information Science, which I don't expect anybody to know what that means, because even information scientists often don't know what that means. So instead, I just say I am researching the intersection of artificial intelligence, machine learning and ethics. And I came into this space on a little bit of a windy path, as it seems most people in this community do. I started off my career as a software engineering major in my undergraduate degree, and I was on my way to becoming a website developer, or just a general software developer, a full stack developer, when I realised my work wasn't quite fulfilling me. It was fun, but it wasn't really making the impact that I wanted to make on the world. And it was actually a little bit of serendipitous timing: one semester, I was taking two classes at the same time that just happened to ignite my passion for AI ethics and computer science ethics. In the morning, I was taking a data science class, my first data science class ever, and in the afternoon, I was taking the one required computer ethics class for my major. And it was such a crazy experience: in the morning, learning how to write code to make machine learning algorithms that would do things like scrape the web, and then in the afternoon, going into a classroom and talking to what was basically a philosophy teacher and having him explain to us why doing things like scraping the web could be really unethical if you don't do it in the right ways. And so I felt like there was this dichotomy of knowledge that I was gaining in one day, where on the one side, the computer scientists were saying, make all the things, break all the stuff, do it all faster, and better and more efficiently. And then in the afternoon, they were saying don't do any of those things, this is bad, this is unethical.
And so I kind of had this aha moment in my mind during that semester, where I realised that there was a whole world out there that I was not privy to, because I had been trained as a computer scientist who was being told to just make things. I was only asking how, how do I make things, but I wasn't asking why I was making those things, or if I should be making those things at all. And so from that moment, I kind of started diving deeper into computer science ethics and computer science ethics education and fairness in machine learning. And I started going down rabbit holes for artificial intelligence and machine learning. And that's how I ended up here.


ELEANOR DRAGE:

We're going to ask you more about education and your role in it. But first, we're called The Good Robot, so we ask everyone our million dollar question, which is: what makes a technology a good technology? And is good technology even possible?


JESS SMITH:

This is a million dollar question, and I had to think about this for a long time. So I personally don't think that there is one instance of good technology that encompasses everything, every technology. I don't really think that that exists, because I think that good is relative and subjective, and so what's good for some people might be bad for other people. But I do think that there are instances of good technologies, so I do think that technology can be good for some people, or for individuals or for a certain population, or community. And I don't think that we should refuse to continue to create technology just because we think that it can't be wholly good for everyone. I think that there's still a way for us to work towards creating good technology. How we work towards good technology, I think, is also highly subjective, and that's why it's very exciting that there's an entire field of human-computer interaction experts and critical scholars and responsible tech ethicists. But I think for me personally, and what I've kind of pieced together over the course of the last few years of my own research, is that good technology is technology that's transparent. And one of the reasons why I think transparency is one of the most fundamental features of good tech is because it recognises that the technology isn't going to be perfect for everyone. So rather than hiding that fact, transparency allows the technology to let users know what is going on, and what might be good or not good for them, and lets them make their own decisions. So it gives the user agency if we give them the information that we have about the technology. And I also think that alongside transparency, accountability is really important. And this is another huge buzzword, because I'm not a big fan of when people in the responsible tech community say good technology needs to be ethical and transparent and accountable and that's it.
I kind of like to explain a little bit more what that actually means. And so accountable technology, for me, means not just policy and regulation and governance, but also creating technology that allows users to be empowered and have the agency to make their own informed decisions. And the last thing that I will say is that, for me personally, I think a good technology is something that takes into account human well-being. I think right now, technology is created with capitalistic incentives, and that ends up accidentally or intentionally exploiting humans and some of the worst parts of humanity. And I think that if I was to decide ways to recreate technology in a, quote, better way, then I would hope that it takes into account things like happiness and human welfare and kindness and caring and health and well-being. So that's my spiel. Hopefully this is possible; I think it's something we can work towards. I don't think it's an end-all be-all, and I don't know how to measure when we're there. But it's a starting point.


KERRY MACKERETH:

Absolutely. So I want to ask you a bit more about what kinds of harm do you think could result from the absence of effective education about new and emerging technologies?


JESS SMITH:

Oh, my gosh, well, think about every single negative consequence that's ever happened, or every single technology scandal that's ever happened in the last, I don't know, even just 20 years. I think all of those result from a lack of education about emerging technologies and about ethics in technology, because those people were trained how to make things, but they weren't trained to critically analyse and question the impacts of the things that they were making. And I think that this education is twofold, it's like a two-pronged approach. So you have to educate the people who are creating the technologies to think more critically about impact. But you also need to educate the users to understand the technology better, to have that agency that I was talking about before, that I think is so important, to be able to make informed decisions on a platform and to not end up being exploited accidentally by a platform. And so the words that are coming to my mind are data literacy, digital literacy, computational literacy, algorithmic literacy: understanding these things that we're interacting with every day. I mean, our phones are in our hands, I don't know, maybe more than even half of the day, or in our pockets or attached to our bodies. But we don't even understand the intricate inner workings of this device, and we don't understand the intricate inner workings of all the algorithms that run the platforms that we're constantly scrolling on. So I think it's the sense of awareness that you build towards understanding what it is that you're interacting with and spending all your time doing every day. And the harm that I think comes from this lack of education is a lack of power, because in my mind, education is synonymous with power. And if you have education, if you have knowledge, then you have the ability to make change.
So if you have the knowledge as a computer scientist about the impact of your technologies, you have the ability to tell the CEO of the company that you're working at that something needs to change, or try to make that change from the inside out. If you have that knowledge as a user of a technology, and you recognise that, oh, maybe this platform is mining all my data and exploiting it and sending it to third party advertisers, then you have the decision to opt out, or you have the decision to change your privacy configuration and your settings, which are so easily hidden from all users because they assume that users don't really care, when in reality, if users had that knowledge, they would be able to make that change, and they would care about it, as long as it didn't require scrolling through, you know, dozens of pages of Terms of Service, because it's so nicely hidden. I do have one example of this power that I think was really striking to me, about two years ago, when I learned that Google search is personalised. That was a huge moment for me, when I realised that this technology that I had assumed was neutral for so long, Google search, it houses all of our information, it's like the library of the world, of course it has to be neutral. When I learned that it actually caters the results that you get according to where it thinks that you are in the world, who it thinks that you're spending time with, what it thinks that your interests are, and what's most relevant to you, that is super powerful. It's like entering a library and then only being directed into one section of the library without having any knowledge that there are 20 other floors that you could be visiting. So I think that it's all about knowledge, it's all about education, and education is power, and power is what provides people the ability to gatekeep and leave people out of the conversation. So, yes, it's very important.


KERRY MACKERETH:

I mean, absolutely, I think you're completely right. And it's very frightening, isn't it? It reminds me of Ruha Benjamin's fantastic work, right, Race After Technology, this idea of the New Jim Code, as she calls it, operating under this veneer of objectivity that hides, or is understood to hide, these very deep racialised power relations. And that's what I wanted to ask you a little bit more about, which is to say: why do you think that there are still so many myths and misconceptions, like this veneer of objectivity, about new technologies, and especially in relation to your speciality, which is artificial intelligence?


JESS SMITH:

Oh, my gosh, it's because nobody actually knows what AI is. Even AI experts can't really explain AI in plain English. It's such a convoluted concept that exists in the ether, that people just constantly reference but nobody has actually pinned down. We don't really even know what intelligence is, so how are we supposed to know what artificial intelligence is? And nobody agrees on what any of these things are, either. If you're a computer scientist, you might say AI is, you know, reinforcement learning, supervised learning, unsupervised learning; you might start spitting out all this technical jargon. But if you're just an everyday layperson who has never been trained in computer science, what is AI to you? I don't even know how I would explain that as just a typical user. I think I would probably just say it's something that's smart, that runs on my technology, that knows me as a person. And, I mean, that's a little bit scary too. So personally, because I've lived on the side of the computer scientists, and I've been in those circles, in those communities, for a little while now, from my experience being on that side of things, I do think that this confusion is somewhat intentional sometimes. This is something that I've talked about with a few people, like Anna Lenhart, who has worked with TechCongress in the USA. There are tech lobbyists whose full time job, 24/7, is to go and basically confuse members of Congress and the general public about technology so that no change can be made. And so there are these people who literally have as their day job, and sometimes their night job, to just try to use as much jargon as possible and to try to confuse people, so that there's no way they could understand anything well enough to make any sort of meaningful change, and so that there's no way they could understand the possible harms that are happening because of the technology design.
And like I was saying before, if you don't have the education, that awareness, that knowledge, you can't fight back. And so there are people who are intentionally gatekeeping that knowledge and stopping that fight back. And I think it's unfortunate that there's confusion, because in my opinion, AI is such a beautiful thing, it can be. I don't even really know how else to explain it; I mean, it's just statistics put into practice. But I think that when it's used on human data, it has the ability to do what feels like magic. It's really, really powerful. And I think that it can be really amazing and used in really good ways. But it can only be used and implemented and created and designed by the people who truly understand it, or at least understand it well enough. And that's unfortunate, because I think that there are a lot of people being left out of the conversation and being left out of the ability to utilise AI for good, because of this unfortunate gatekeeping of knowledge.


ELEANOR DRAGE:

That's fascinating, and actually some things that I did not know: horrific and completely scary. So what kinds of methods, strategies and approaches to algorithmic literacy, to making people able to, perhaps not read, but at least understand or know something about how machine learning functions, do you think are particularly promising and effective? What should we be looking at?


JESS SMITH:

Yes, I love this topic. So I think that some of the most interesting and exciting things I have encountered in the world of algorithmic literacy education are in K through 12 schools and primary schools. Because I think it's just like learning a new language: whether you're teaching people how to speak Spanish, or English, or Chinese, or whatever language, if you teach a child at the age of two, then they're much more likely to be bilingual or trilingual, and it kind of just comes as second nature while the brain is forming. In the same way, if you teach people how computers work, how algorithms work, how to code at such a young age, then it's just another language that they can add to their mental toolkit for later in life. And I think that it's not just about teaching children how to be computer scientists, but about being a mindful user of technology as well. I wish, when I was in ninth grade and I got Facebook for the very first time, that somebody had told me what was happening with my data. I wish that somebody had told me to not post all of the really embarrassing things that I had posted that might be up there for all of eternity. I wish that I had known what data ephemerality was. I wish that I had known all these things, so that I could have been a more intentional user of technology, but I didn't. And now that information is out there, probably forever, because it exists on multiple sites and with multiple third party vendors. So if we can educate children at a young age about what is happening with their data, what's happening online, how this online world is curated, and how it might possibly shape them, then they can take that knowledge with them and be intentional, responsible tech users for the rest of their lives. If they don't like the way the platforms are made, they have the ability to ask for change and to protest.
And also, if they really don't like how things are, and they really want to make change outside of activism, then maybe they'll become computer scientists themselves. So I think that, first and foremost, we should focus on education efforts at a young age. And secondary to that, I think we should also be teaching people through technologies themselves. This is actually some work that I've started in the last year: using transparency in technology as a tool to educate people about how that technology works. So for example, on TikTok, this is something that I noticed when I first got the app: they had videos on the homepage that were teaching the users, in a really educational and effective and engaging way, how the app worked, and how the algorithmic curation process worked for the Discover feed that you get, which is basically the biggest part of TikTok that everybody goes there for. Unfortunately, I don't think that they explained everything, and it might not have been 100% honest. I'm not going to make any, like, intense claims. But I do think that it's a good start to at least inform people that an algorithm is at play, what an algorithm actually is, and how their data and their interactions and their engagement on the platform are going to come around full circle and alter the way that they experience content or information or anything on that platform. So I think that those two approaches are the most promising that I've seen: education at a young age, and education through the technology platforms themselves, once we get tech CEOs to actually be okay with people understanding what they're doing under the hood.


ELEANOR DRAGE:

So how are you thinking about how a computer ethics syllabus needs to be constructed? What can you tell us about this programme and what you think is important to have in it?


JESS SMITH:

Yes, definitely. Also, just a side note: this is kind of a little bit of a debate in the responsible tech community, as to whether or not computer scientists should be the ones who are responsible for learning ethics, even if it's just at more of a breadth and not a depth level, just to understand the basics, or whether we should just be training people who are total experts in both ethics and computer science and have that depth at the intersection there. So there is this interesting debate happening as to whether it's actually useful to train computer scientists who don't want to be doing ethics to do ethics or not. But I am of the opinion that it's useful to teach anybody ethics, or at the very least ethical decision making, and especially people who have a lot of power, like computer scientists. So that being said, that's the motivation here. In the curriculum that I have been working to develop, and also just in course modules and the classroom setting for computer scientists, one of the biggest things that I focus on, and that I think other people should focus on as well, is that I'm not teaching students what is wrong and what is right. I'm not telling them, okay, be a utilitarian, and always try to get the most good for the most amount of people, or mitigate the most harm for the most amount of people. Because I don't want to assume that utilitarianism is an ethical framework that is good for all people around the world, because that's just not the case. So instead of teaching them what's right or what's wrong, I think it's good to have the students discern for themselves what is good and wrong. So to teach them this ethical discernment, this ethical decision making, so that they can make the decision for themselves, and they can have awareness about the impact that their actions might have on other people.
So that's first and foremost. I think one of the biggest things that I try to nail down in these courses is that the students are learning the things that have good impacts on other people and the things that have negative impacts on other people, and then they make the decision for themselves, ultimately, about the kind of good or bad that they want to unleash on the world through their technology. And I think the second thing that I try to teach in these modules is that no technology is neutral. If you have heard of Langdon Winner's "Do Artefacts Have Politics?", this is one of the seminal pieces of the responsible tech community. And in that paper, he basically argues, back in the 1980s, that no technology is neutral: even if we make it to be neutral, or even if the intention is for it to be neutral, when it impacts people, nobody is impacted in a neutral way. And so that is another fundamental piece of the education, I think: teaching computer scientists that even if they're just designing websites, they are still making an impact on people. Even if you're just a UX designer and you're designing buttons, you still need to think about ethics and responsibility and accountability. And this is the motivation for computer scientists to stop saying that their work has nothing to do with ethics. It also kind of incentivises computer scientists to stop saying, 'if I don't make it, somebody else will', because that's one of my least favourite arguments that computer scientists make. And it makes them think more critically: well, maybe if I don't make it, I can actually stop other people from making it, too. We don't live in an inevitable, deterministic society where technology is going to progress, and progress in a bad way, no matter what we do, so that we just have to, like, hang on for the ride.
No, we are intentional human beings, we are actors in this world, and we can make a difference, we can make change, we can make decisions. And so that's actually the last fundamental piece of the curriculum that I try to nail down with students, and it's that we have the agency to make the best worlds that we can, or at least to make the world that we hope to live in. So it's up to us as computer scientists, which is what I say in the classroom, but ultimately, actually, it's up to us just as citizens of the world, to first figure out what kind of world we want to live in, and then to create it. So that is the final module of the course: to let people know that we shouldn't just be sad and sit down with this thought of hopelessness, that we can't change the world and that we're all doomed and all technology is bad, but that we do have the agency to make change, and because we have the tools to make change, we should do it, just, you know, being mindful of the decisions that we're making.


KERRY MACKERETH:

Absolutely. And it sounds like you're doing such fantastic work, and it's really lovely to get to hear about it. But finally, just to kind of bring things to a close, we came to know you because you're also the co-host of the Radical AI podcast. Everyone listening, please check it out, it's absolutely fantastic. So could you tell us a little bit more about your podcast, why you started it, and how the podcast has shaped or changed the way that you think about AI?


JESS SMITH:

Absolutely, yes. So I am the co-host and co-founder of the Radical AI podcast, and I run it with Dylan Doyle, who is also in Boulder, Colorado. And we started this podcast out of a little bit of frustration on a few counts. For me, it was first this frustration that mainstream AI seemed to lack discussions about ethics and impact, and lacked diversity of thought. Nobody was talking about those really sticky, uncomfortable issues that are so important and so fundamental and foundational to AI systems. And it was also out of this frustration that podcasts that mentioned ethics were often from large tech corporations, and what they were saying was often ethics washing, or giving some sort of lip service without actually talking about the important issues that they were talking around. And so in our podcast, we wanted to tackle those uncomfortable issues that weren't being talked about, like racism, and sexism, and discrimination and oppression, and all the systemic issues that underlie and are the foundation of every technology that is created, and that play a huge role in how AI impacts society. So that was our motivation for why we started: to actually start talking about the things that people weren't talking about, but that everybody was thinking and everybody was experiencing. And I think that through that work, and through uncovering some of these sticky topics and some of these really important issues, I've started to learn more and more how much of a sociotechnical system AI is. Basically, that means that AI influences society just as much as society influences AI. And so AI exists in this cyclical system, where humans and technology are constantly interacting and shaping each other, and you cannot work on one without the other. So you cannot try to begin to understand society without beginning to understand the technology and how it has shaped us.
And you cannot try to begin to understand AI, or to begin to understand the possible negative consequences and impacts of AI, without first understanding how we as humans have shaped the technology and how we continue to shape it. And that's a theme that just keeps coming up in my life over and over again, especially through this podcast project. It's a topic that we unpack pretty regularly on the show. So if anybody's interested in those kinds of topics, which I'm sure you are if you're listening to this amazing podcast, definitely check it out.


ELEANOR DRAGE:

Brilliant, thank you so much, Jess, and your co-host and co-founder, Dylan is also really lovely, and we loved having him on the podcast too. The computer science community is very, very lucky to have you teaching and creating these new syllabi. So thank you very much for joining us, and we'll hear more from you very soon.


JESS SMITH:

Absolutely. Thanks for having me. This was a blast.

