
Maya Indira Ganesh on the Myth of Autonomy

In this episode, we chat to Maya Indira Ganesh, the course lead for the University of Cambridge Master of Studies programme in AI Ethics and Society. She transitioned to academia after working as a feminist researcher with international NGOs and cultural organisations on gender justice, technology, and freedom of expression. We discuss the human labour that is obscured when we say a machine is autonomous, the YouTube phenomenon of ‘Unboxing’ Apple products, and why AV ethics isn’t just about trolley problems.


Maya Indira Ganesh is Course Co-leader of the Master of Studies (MSt) in AI Ethics and Society programme at CFI. She is a media and digital cultures theorist, researcher, and writer who has worked with arts and cultural organisations, academia, and NGOs. Maya transitioned to academia after working as a feminist researcher with international NGOs on gender justice, technology, and freedom of expression. Her work has consistently brought questions of power, justice, and global inequality to bear on those of the body, the digital, and knowledge.

In 2021 Maya submitted her Dr. phil. dissertation in Cultural Sciences (‘Kulturwissenschaften’) at Leuphana University, Lüneburg, Germany. Inspired by Baradian and Foucauldian notions of apparatuses, her research traces the material and discursive vectors in meaning-making and knowledge-production that bring AI/autonomous systems into being as putatively ‘ethical’, ‘autonomous’, or ‘intelligent’. This work grapples with the case of the driverless car, which exists simultaneously as a 20th-century automobile, a planetary-scale computational infrastructure, and an AI/robot imaginary. Maya’s dissertation investigates what initiatives for the governance of such a complex technology imply for human social relations, spaces, and bodies. A central concern in this and in her other work is the shifting and unstable place of human communities in highly data-fied, digital, and nonhuman worlds.

She continues to write and speak about digital culture, and remains associated with feminist NGOs and with media arts and cultural festivals and communities. A full list of her writing, projects, and networks can be found here.


Reading List



Moral Machine project: https://www.moralmachine.net/


Ganesh, M.I. (2020) Complete machine autonomy? It’s a fantasy! In The Rockefeller Foundation (Ed.), AI+1: Shaping Our Integrated Future. New York: The Rockefeller Foundation.


Ganesh, M.I. (2019) You auto-complete me: Romancing the bot. In B. Datta and R.K. Padte (Eds.), Deepdives. Mumbai: Point of View. https://deepdives.in/you-auto-complete-me-romancing-the-bot-f2f16613fec8


Transcript


KERRY MACKERETH:

Hi! We're Eleanor and Kerry. We're the hosts of The Good Robot podcast, and join us as we ask the experts: what is good technology? Is it even possible? And what does feminism have to bring to this conversation? If you wanna learn more about today's topic, head over to our website, where we've got a full transcript of the episode and a specially curated reading list with work by, or picked by, our experts. But until then, sit back, relax, and enjoy the episode.


ELEANOR DRAGE:

Today we’re talking to Maya Indira Ganesh, the course lead for the University of Cambridge Master of Studies programme in AI Ethics and Society. Maya transitioned to academia after working as a feminist researcher with international NGOs and cultural organisations on gender justice, technology, and freedom of expression. Her work has consistently brought questions of power, justice, and global inequality to bear on those of the body, the digital, and knowledge. She has published widely on the fantasies of machine autonomy, AI policy and algorithmic justice, and is the creator of the fantastic A is for Another: A Dictionary of AI, the entries of which show how AI is shaped in terms of the human. In this episode we discuss the human labour that is obscured when we say a machine is autonomous, what happens when we unbox iPhones, and why AV ethics isn’t just about trolley problems. We hope you enjoy the show.


KERRY MACKERETH:

Thank you so much for being here with us today. It's such a joy to have you on the podcast. So just to kick us off, could you tell us a bit about who you are? What do you do? And what's brought you to the topic of feminism, gender and technology?


MAYA INDIRA GANESH:

Hi, thank you so much for having me on your podcast. My name is Maya Indira Ganesh and I am one of the new co-course leads on the new MSt. That's a Master of Studies programme at the University of Cambridge. It's a Master of Studies about AI ethics and society. And there's three of us leading this in its first year and I'm one of the three.


KERRY MACKERETH:

Fantastic, well, they are some lucky students that are going to have you as one of their core supervisors. So I also was wondering, we talk a lot on this show about feminism and feminist approaches to technology. But we also recognise that there's multiple feminisms and that feminism means really different things to different people. So what does feminism mean to you?


MAYA INDIRA GANESH:

So I think that feminism is a home. I think it's the place where ... I think it's kind of like a political home. But it's also a home of friendship and companionship, it's a place where I, where I think I first learned as an adult about politics, about creativity, about camaraderie, I think it's where I learned to ask certain kinds of questions about myself, about the world, about language. And the thing about a home is that you can leave it and many of us are lucky enough to be able to go back, and I really think of feminism as that place where I started doing a lot of work. I spent my 20s working in feminist collectives and organisations supporting women and children and queer communities in Delhi who were facing violence - social, economic, political, interpersonal, and sexual violence. And those were very formative and possibly also very scarring years, because it's hard work. But I think that what was important was ... it wasn't just about, you know, supporting people to deal with violence and the legal system or whatever; it was about asking those questions through feminism - and so feminism as a medium as well - about how does violence manifest? Where does it come from, and who has the power to, who has power over? And so when I say that, you know, when I use the metaphor of home, I'm not using it lightly, but I think that it's, it's been a very difficult place of learning as well. And, and where I'm still learning, because it's not just about learning to ask the questions, I think it's about learning to live with the answers to those questions as well. And I'd like to think of this home called feminism as, as sort of a place of doubt and love, because it's, because that's what home is like, but also, it's … feminism makes us doubt love, and it makes us love doubt. And it makes us doubt the things we love. But it also makes us love without doubt, and how to live with both doubt and love. And I'm not gonna like, you know, worry that combination of words anymore, but I think that should give you a sense of how important feminism is to me.


ELEANOR DRAGE:

That's really beautiful. Thank you, and certainly something I've been thinking a lot about recently. Love is complicated. So on this podcast, we are called the Good Robot. And we have this quartet of difficult questions that I think you're the perfect person to respond to. So what is good technology? Can we have good technology? What does good technology look like? And how do we work towards it?


MAYA INDIRA GANESH:

Right, so I found this question really good because it's one of those words - good - the word good is so divisive. And it's so sensitive, and it's kind of complicated. And at one level, it's like, you've also decided to call your podcast the Good Robot, and maybe we should have another conversation about why exactly you call it the Good Robot. I mean, I'm very curious about that. But it's almost quaint, this word good. And for that reason, I think it's also provocative. So I, when I was sort of thinking about this question, for some reason, the first thing that popped into my mind was this... I mean, really, really unexpected thing of the moment of, you know, the iPhone unboxing. It's become sort of like this cultural moment where a new iPhone comes out, and there's people lining up outside shops to wait to get it and then there's entire YouTube - they’re almost like TV shows where people, you know, sort of record themselves lining up outside the store, and then they get it. And then there's that moment of the reveal when they open the box and they get the iPhone. And it's supposed to be this moment of a great revelation, the end of a journey to you, that begins and ends with the opening of the box and the oohing and aahing over the features of the phone, a phone which is actually one of the most opaque and difficult to break technologies. For this reason, the iPhone is very good technology, it's extremely secure; security experts will tell you that you should use an iPhone because it's so stable and secure. It's powerful. It's efficient, it's beautiful. It's slick, but its goodness and its power rely on it being absolutely impenetrable. And I think when I think about technology in terms of these devices, I think the place where I really learned to sort of see them differently was through my years working with communities of hackers and culture jammers. And I used to do a lot of work in sort of digital rights and civil society organisations focused on digital security and privacy, particularly for human rights defenders, particularly feminist activists, and journalists. And what I learned from communities working in that field was that a technology becomes good when you actually make it yours, when you mess with it, when you don't accept it for what it is, but you look under the hood and lift the lid. And weirdly, this moment of the unboxing, even though it is literally about the opening up of a box, it's actually an acceptance of the impenetrability of that technology. It's not at all about lifting the lid. Now, of course, to sort of lift the lid on a technology and look under its hood takes a lot of skill. And we can learn those skills. But I actually think it's just about this curiosity to ask certain kinds of questions. And this is where I sort of link it back to feminism and being able to ask questions, to ask, you know, how does this technology work? Why does this work in a particular way? Who designed it like this? How can it work for me, am I the person this technology is intended for? And actually, I think that there's a very particular kind of user in mind when these technologies are architected. And through my work, I've come to understand that it is the most marginal person in society who is not the intended user of these technologies. And I can come back to that later.
But I think just to stick with this idea of what makes a technology good, I think it's that combination of wonder, but also disregard, and knowledge that whatever it is, I'm going to figure it out. And so I remember that, you know, working with communities of hackers and culture jammers - and I use the word hacker and jammer in the most broad sense - many of these people, you know, have technology skills, but also I think many, many people who are hackers and jammers are self-taught. And they only learn through constantly doing, through constantly loving technology, but also messing with it and saying, I want to interrogate almost my own excitement with it. And then there are also people who, you know, without the technical skills will look at that iPhone and want to hack its history and say, how does it work? And why is it sort of architected in this way? For instance, there are artists - like Ingrid Burrington is one of those people who comes to mind - who will sort of like literally break down an iPhone to sort of understand that, you know, it actually comes from the Earth. And there are certain pre-histories to the Earth being excavated for technologies, for minerals that go into the iPhone. So that's one way of looking at, sort of asking, this question of is a technology good: sort of like going back to the source, looking at its place in the world, and what effects it has, and unpacking it. But I think the other kind of goodness that I also want to highlight is something we don't do enough of, which is, what is the joy or abundance that this technology offers us? And you know, many of us are very good at asking critical questions about technologies. But I, but I have been thinking a lot more about, what does joy and abundance mean in the making of a good technology?


ELEANOR DRAGE:

And that's just the hardest question to answer. And we were talking recently to Frances Negrón-Muntaner, who's a filmmaker, and she was talking about decolonial joy and a project that she had, which created a technology that allowed people in Puerto Rico to respond to questions of what they valued. And that was really interesting. And thanks so much for bringing up this kind of expansive AI development and deployment pipeline, because we also define, well, define technology as what it's made of: the processes of extraction, the workforce, the labour that goes into producing these kinds of technologies - that's so important for us. And that's part of our provocation of the Good Robot, which you identify too - it really is a provocation. And it's kind of pithy and cute, and can move in many directions, but it’s something that we question more than anything else, rather than just buy into uncritically, of course. So you talked a bit about hacking, and kind of delving into what's inside technology. And part of that is delving into what makes us human. You know, when people look inside the technology, we're also trying to find out about us; there's something about technology that also is revealing about humanity. And you've done some fantastic work on autonomy, and the relationship between ideas around human autonomy and machine autonomy. So I wanted to ask you about these myths that we tell ourselves about AI, specifically that AI will fundamentally threaten human autonomy. It's something that transhumanist thinkers like Nick Bostrom are very concerned about. But you've highlighted this in The Ironies of Autonomy, a fantastic paper that you published in Nature. And you encourage us instead to think about how machines and humans are co-constituted: so they make each other. So what do you think are the problems with making autonomy a guiding principle in AI ethics? And what does it mean to think about humans and machines as making one another?


MAYA INDIRA GANESH:

Yeah, thanks for sort of bringing up this idea of autonomy, I think as somebody who has worked with different kinds of civil society or political movements, this question of autonomy has meant so many different things over time. And I mean, it can mean self-determination. And I think that's a very powerful way of understanding it. But I think that another meaning of it is that there's a sort of fetishized individuality, or separateness, that comes from the application or sort of the development of autonomy in the context of AI technologies. And in this work that you mentioned here, the paper The Ironies of Autonomy, I was actually playing on the work of a scholar called Lisanne Bainbridge who wrote a paper in 1983 called Ironies of Automation. And what she talked about in that work was how different kinds of automatic control systems were being put in place in various kinds of industrial or manufacturing contexts - that is, the automatic control systems were being put in place because they could do the job better than the human operator. And this is the whole history of industrialization, as well. But what Bainbridge talked about was this irony that even though the automatic control system is better, the human operator has still got to be there to monitor that this supposedly better system is working effectively. So actually we've been in this long term, long time loop with our technologies in a way that we're very sort of, kind of part of them. And I think anybody who operates any kind of machinery from a car to a, I don't know, a can opener, you know, that there's a way that you have to sort of sometimes in a very bodily way, almost you can feel that sense of being part of the technology you're working with. I mean, scholars have talked about things like walking sticks, and disability activists and scholars will talk about embodied technologies that become sort of that - they become part of our bodies and that's how we use them, even reading glasses. You know, they work because they're sort of like part of our bodies and it changes how we see them and you have that feeling of, you know, when you put a new pair of glasses on for the first time, you immediately see the world slightly differently. And what I was looking at in my doctoral research was this technology called autopilot in the emerging driverless car - and we know autopilot from aviation - you know, it's the thing which makes the driverless car appear like it is driving itself, that it is autonomous. And actually, that's not at all true, because the so-called autonomous vehicle actually requires humans a great deal. And I think what's an interesting myth that gets sort of wrapped up in this idea of the car being autonomous is that the human is autonomous, that the human driver is autonomous, and then it just gets sort of translated into the car. And I don't think that humans are autonomous at all, and kind of neither are cars even and how we drive them.
So what happens with the autonomous vehicle, in sort of making it appear like it's driving itself - first sort of unpacking how the car actually drives before you put it on autopilot - is that it requires a very complex array of data infrastructures, like sensors, like LIDAR (Light Detection and Ranging), like mapping software, and all of these technologies are sort of inside the car, and they're in the cloud, and they're allowing the car, if you think of it as computation, to perceive the environment. But the car that senses and perceives the environment cannot make sense of the environment. So it needs … so for example, the images that cameras and sensors on a driverless car will pick up need to be actually parsed by a globally distributed set of humans working for various companies that will actually tag and annotate these images. So it's not just the usual companies like Amazon Mechanical Turk, or CrowdFlower, you know, the sort of crowd work companies, but there are actually quite specialised companies now that will look at, let's say, driverless car images, or images that driverless cars are picking up, and actually tag and annotate them and sort of send them back, or sort of constitute these large libraries - and cars can then share that amongst themselves through their cloud connections. Or you even have the case of CAPTCHAs, I mean, these are things that everyday humans are doing - humans on the internet often have to verify their human status by, you know, looking at a series of images, like stairs, or street lights. And, you know, if you want to complete some kind of digital transaction, you've got to identify all the stairs, identify all the street lights. And what you're doing in that process of, you know, looking at those CAPTCHAs and identifying things is you're actually tagging images for future driverless cars. If you notice, those CAPTCHAs are usually about the external environment. It's never about, you know, things like, I don't know, chairs, or buttons, or - sorry, I'm just looking around at things around me - you're looking at context for the world that the driverless car, the future driverless car, would inhabit. So actually, we are also part of this complex computational “cognitive” system - and I'm making air quotes around many of these words, like cognition, because I think that machine cognition is quite specific in that, as I've been explaining, it actually requires humans who understand and know the world to make sense of the world. So a car can then look at something - look, again, in quotes, in air quotes - can look at something and say, Oh, that is a cyclist, that is a street light, that is a pedestrian crossing. But otherwise, I mean, a driverless car is computation that doesn't know these things. So for a car to be autonomous, you actually require all of this complex stuff to be going on in the back. And so when it goes into that mode of autopilot, and the car is driving along, there are all of these systems that we never sense that are sort of in play. But there's also this, you know, coming back to Bainbridge and these ironies of automation and autonomy: you've got the human driver, and at this point, we still have to have human operators in driverless cars. You've got the human driver, who has put the car into autopilot but is in this weird situation of having to be relaxed and alert at the same time. So we have the driverless car because we're told that it is safer, computation makes decisions faster and better than human drivers.
Computational cars don't get distracted, they don't get drunk, they don't get sleepy, and they don't text when they drive. But what happens is, if there is ever a problem, the human driver has to leap back into control and take control and make sense of the environment, or something in the environment that the car cannot parse, that has fallen outside of its known libraries of, you know, the world. And … but the weird thing is, it's hard for humans who are already relaxed to jump back into a state of alertness. And this is where a number of car crashes have happened. Because humans, because of the superior computation, have become more relaxed and are not paying attention to the road. But when autopilot sort of, or the car sort of, tells the human, Oh, there's something I can't deal with, the human has to jump back. And that gap, that delay in taking control back, is when a lot of crashes have happened. So, I mean, you know, people know this feeling very intimately of, you know, when computation does things, you lose skills, I mean, simple things, like, if you've been using a calculator all your life, you actually don't know how to add things up anymore, just on your own, in your head. So magnify that, you know, to the scale of driving, and especially the stakes of driving, and you'll see how complex something like autopilot is, and what it means to actually have something that is, quote unquote, autonomous, is - I don't know, we have to unpack some of these words, when you look at the extent to which you actually have the human required to fabricate that illusion of autonomy. I think, I mean, I'm also just gonna say this one other thing, and you can decide to keep this in or not. But I think the implication of this construction of autonomy, the marketing of it, is eventually responsibility. And this is part of the ironies of autonomy: that the computational system is supposed to be better than the human, it still requires the human to make it work, to make it appear autonomous. But when it comes to accidents and crashes, the human is always caught in what the scholar Madeleine Elish calls a moral crumple zone, which is really evocative in the case of driverless cars, because the computation is better, but you're still considered to be - you are more responsible, of course, only a human can be held responsible. So this idea of liability is also about, especially in the case of, you know, when there's a tragedy, when there's a fatality - the idea of liability is also about going to a particular court of law and being able to prove responsibility. But I think all of the questions people are asking right now about things like iPhones and where they come from, and the interconnectedness of things, of our technologies and humans, is: why is it down to the split-second moment of that decision, when actually, the entire architecture of the system is based on certain data sets, is based on, you know, crowd workers who are working in conditions of poor labour and on push contracts? So you can make it about that moment of the crash, or you can make it about the entire system. And I'm not saying, oh, we shouldn't have driverless cars. I don't think that's the right question to ask. But I think that the centering of accountability into that moment of the crash is really problematic. And we don't use that in other kinds of car crashes, you know, that have happened.


KERRY MACKERETH:

Yeah, of course, that's so fascinating. And something else that you've talked about is how it's misleading to reduce ethics work in relation to automated vehicles to trolley problems. So could you tell us why it's important to trace the histories of the so-called trolley problem, and why it's not good that this is the go-to problem people think about when we say AV ethics?


ELEANOR DRAGE:

And tell us what the trolley problem is! For those who don't know, my version of it probably isn't the correct historical version. So help us out with explaining what that is, as well!


MAYA INDIRA GANESH:

Sure, I'm happy to talk about it, especially since I've just spent five years thinking about the trolley problem and why it is that it has become so popular. So the trolley problem is an adaptation, in the driverless car context, of something that an Oxford University philosopher called Philippa Foot came up with, based on the idea of the doctrine of double effect. And I'm not going to go into that now, but I think the original version of the trolley problem as set out by Professor Philippa Foot was of a train trolley going down a track, and there's a fork in the track, it forks to the left and to the right. And now this train trolley is rolling down the track happily and suddenly its brakes fail and you can't stop it. Now, the trolley has a lever in it, and the lever will force it to go either left or right, it's not going to stop, but it will go left or right. And there is obviously a human who is encountering this philosophical but also very physical problem of the trolley without brakes. And the only thing you can do is allow it to go left or right. Now what Professor Foot decided is that there is, on the left track, one person working on the track. And on the right side of the track, or rather the right fork, there are five people working on the track. Now the trolley, if you turn it to the left, will obviously, you know, mow down one person, and if you take it to the right, it will mow down five people. So the question is, the question originally asked by Professor Foot is, Which way should you turn the trolley? And this is like a very specific kind of question that positions the taking of the life of one person over that of five people. And this, you know, very much speaks to this utilitarian approach: you know, letting go of one person as opposed to five is better. That's the original form of the problem. But actually, there's another dimension to that, which is not just the utilitarian aspect, or the consequentialist aspect, as ethicists would say, but also the rationale: what are the rules by which you would make this decision and arrive at this decision of which way the trolley should go? And I think that is what became the provocation for things like autonomous vehicle ethics. And I think it's important to also say that, you know, when this discussion about AV ethics started, the trolley problem was always intended to be a provocation to think about the outcomes of, what would it mean to create a technology that was making decisions in a complex and fast-changing world? How can you actually be sure that, you know, an autonomous technology, an autonomous vehicle, would make decisions that did not result in the loss of human life, or other kinds of, you know, fatalities or tragedies? And so the trolley problem was a starting point for that. And I think it was meant to inspire engineers to think about how they would architect things like driverless cars. But what happened was that it became incredibly popular in the mainstream media. And I think this is somewhat regrettable, because it just started to shape the entire discourse about autonomous vehicle ethics. And I think many of the people involved in that world, you know, would say that it was always only meant to be a provocation. But the problem is, sometimes when you use something so provocative and it gets amplified through the media, then it just becomes the go-to, you know, and this is how people sort of think about it.


I mean, there are all of these other versions or adaptations of the trolley problem that philosophers have come up with. And I think that they're all just sort of thought experiments. But I think people have taken this quite literally. So there's no end to TED Talks, but also XKCD comics, about how to make decisions in this case of, you know, this moment of the accident. And there's quite a popular MIT Media Lab project called the Moral Machine that is sort of set up on the same premise, that there's a driverless car with brakes that fail. But it takes that one step further and makes it a little bit more complex. So you have a driverless car approaching a crossing - a pedestrian crossing - with green and red lights, and the brakes fail. And in some versions of the problem, there are people in the car, there are children in the car, and in other versions the car is empty. But what becomes interesting is, in place of the car sort of having to go left or right, or people on the track, you have different kinds of humans who are crossing at this pedestrian crossing in front of the driverless car hurtling down with its failed brakes. And these are, you know, everything from, like, thieves running away to cats and dogs, to pregnant people, to old people. So you've got this array of different humans and nonhumans who are crossing at the pedestrian crossing. And this project is called the Moral Machine. And it was actually set up as an online game. So you could go on to https://www.moralmachine.net/ and actually play this game, and work through these 13 different scenarios and make decisions about how the driverless car should respond in this context. And they got lots and lots of responses to it also, because they ran it in many different languages. And I think there were 12 or 13 languages that this was run in. So they did it around the world and they ended up with a data set of 39.6 million responses to this provocation, to this question. So even though the trolley problem started as, you know, just a provocation for AV ethics, it actually sort of sparked off a lot of questioning very much along the same lines. Though the shift that the Moral Machine project actually makes is not about how do you architect these decisions, but by putting together a very large data set. I think what's happened is they've said, you can actually arrive at a statistical explanation or statistical rationale for how the driverless car should respond, because you actually have that many people who are responding to this question. And you know, people can sort of look at many of the critiques online about the Moral Machine. But I think what really fascinates me is how we have hinged the question of ethics around actually a moment of death. And who should the driverless car with the failed brakes kill? So the French theorist of drones Grégoire Chamayou actually refers to this as necro-ethics: not ethics in terms of living well, but ethics in terms of killing well, or killing appropriately. And just like we, you know, like to outsource a lot of our dirty work to robots, like cleaning, or care work, we're also sort of outsourcing the dirty work of these complex decisions about who to kill - to think the unthinkable for us, in a way - to this computational system, or to people on the internet who will make decisions. And I'm not saying that, you know, these decisions, that these 39.6 million responses, are actually going to architect any kind of thinking about AV ethics.
But I feel that it has shifted the scale at which we even think of ethics; I think it's sort of hardened into these really problematic questions about how a driverless car should make a decision about who to kill. And this goes back to my thinking about what is the starting point for our ethics? It's not about joy or abundance. It is about how do we think about legal liability? Or who is eventually accountable? And who can we … should we legitimately, statistically sacrifice in the case of an accident? And I think you'll see this logic as well in a number of other technologies, like autonomous drones, like lethal autonomous weapon systems: you have this sort of killing happening at a distance, or there's a logic for killing at a distance happening through the clean work of computation and statistics.


ELEANOR DRAGE:

Thank you, that was the most fantastic answer. So thank you so much for that, and it's been an absolute pleasure to have you on the podcast.


MAYA INDIRA GANESH:

Thanks. It was lovely to speak with you about my work. Thank you for the opportunity.

