
Neda Atanasoski on AI, Racial Capitalism and Labour


In this episode we chat to Neda Atanasoski, Professor and Chair of Feminist Studies at UC Santa Cruz, about the relationship between technology, racial capitalism, and histories of gendered and racialised labour.

Neda Atanasoski is Professor and Chair of Feminist Studies and Professor and former Director of Critical Race and Ethnic Studies at the University of California, Santa Cruz. She is the author of Humanitarian Violence: The U.S. Deployment of Diversity (University of Minnesota Press, 2013) and Surrogate Humanity: Race, Robots, and the Politics of Technological Futures (co-authored with Kalindi Vora, Duke University Press, 2019). She is also the co-editor of a 2017 special issue of the journal Social Identities, titled “Postsocialist Politics and the Ends of Revolution.” Atanasoski has published articles on gender and religion, nationalism and war, human rights and humanitarianism, and race and technology, which have appeared in journals such as American Quarterly, Cinema Journal, Catalyst, and The European Journal of Cultural Studies. She is currently the co-editor of the journal Critical Ethnic Studies and the founding co-director of the Center for Racial Justice at UC Santa Cruz.


Reading List:


Atanasoski, Neda, and Kalindi Vora. Surrogate Humanity: Race, Robots, and the Politics of Technological Futures. Duke University Press, 2019.

Nakamura, Lisa. "Indigenous Circuits: Navajo Women and the Racialization of Early Electronic Manufacture." American Quarterly, vol. 66, no. 4, 2014, pp. 919-941. Project MUSE, doi:10.1353/aq.2014.0070.


Marez, Curtis. Farm Worker Futurism. University of Minnesota Press.

Chaar-López, Iván. "Sensing Intruders: Race and the Automation of Border Control." American Quarterly, June 2019.


Transcript:


KERRY MACKERETH:

Hi! We're Eleanor and Kerry, the hosts of The Good Robot podcast. Join us as we ask the experts: what is good technology? Is it even possible? And what does feminism have to bring to this conversation? If you wanna learn more about today's topic, head over to our website, where we've got a full transcript of the episode and a specially curated reading list with work by, or picked by, our experts. But until then, sit back, relax, and enjoy the episode.


ELEANOR DRAGE:

Today, we’re chatting with Neda Atanasoski, Professor and Chair of Feminist Studies at UC Santa Cruz, about the relationship between technology, racial capitalism, and histories of gendered and racialised labour.


KERRY MACKERETH:

Thank you so much for being with us. Could you introduce yourself, tell us a little bit about what you do, and what brings you to the topic of feminism, race, gender, and technology?


NEDA ATANASOSKI:

Thank you so much, Kerry and Eleanor. It's such a pleasure to be here with both of you. I'm a Professor of feminist studies and critical race and ethnic studies at the University of California at Santa Cruz. And throughout my career, my research has focused on the intersection of racial politics and political liberalism in the US context. So my first book looked at humanitarianism and US humanitarian wars. And part of that research did actually think about technology in the context of war technologies, and especially how in the early years, drones were actually marketed as a humanitarian technology. So the interest in thinking about political liberalism, race and technology continued for me.


And when I was finishing that book, I was living in the Bay Area, which, of course, is the capital of big tech companies, startups, and venture capital, but also a place that has the reputation for being exceptionally liberal in the US context. It was really interesting to me to see how tech money was exacerbating inequalities.

And in the years as tech was really growing, I witnessed so much displacement and the proliferation of homelessness, and really saw how all of this wealth didn't actually extend to the majority of the Bay Area population. And so at the same time, my good friend and colleague, Kalindi Vora, who is a Professor of Women, Gender and Sexuality Studies at UC Davis, and I began thinking more expansively about how technology is political, even though many people think of technology as disconnected from the political realm. We were interested in tracking how the politics of what we imagine are good technological futures are deeply influenced by histories of racial and gendered hierarchies. This research culminated in our book Surrogate Humanity: Race, Robots, and the Politics of Technological Futures. That book was thinking through how technology, which gets thought about as bringing about revolutionary new possibilities, is tied to values that are about capitalist development, values that reiterate a fantasy in which machines and algorithms and AI take over the dull, dirty, repetitive and reproductive labour that had previously been performed by racialised and gendered populations, so that humans will now be free to be creative and do pleasurable work. But these technological futures are also very much racial fantasies. So we were thinking about that in Surrogate Humanity.


ELEANOR DRAGE:

Our podcast is called The Good Robot, and so we want to develop some of those themes. And with this in mind, we’d like to ask you whether you think there can be good technology, or if that's just a fantasy of technoliberalism? Can ethical technology exist and flourish if what has been called racial capitalism is still part of how the tech industry operates?


NEDA ATANASOSKI:

I want to start by saying I love the title of your podcast, The Good Robot. And actually one of the very first interviews that Professor Vora and I did for Surrogate Humanity was with the curator of the MIT Museum's exhibit on the history of robotics. This curator, Debbie Douglas, told us that part of her work there at the MIT Museum was providing good PR for robots, good public relations, because historically there's been a fear of robots. So we thought it was really interesting that robots and all tech need PR to make us think of them as good. And so it's interesting to think about how, in part, good technology is a question of marketing, right? In this instance, within racial capitalism, good technology is a product sold to consumers for improvement, betterment, efficiency. But then your question also makes me think, on the other end of the spectrum, there are movements that work to ban certain types of technologies because they're inherently unethical. One of the things I'm thinking about here is the active campaign to ban sex robots, as well as the campaign to ban killer robots. The campaign to ban sex robots, for example, talks about how these AI-enhanced sex dolls perpetuate patriarchal relations, misogyny, and even paedophilia. And they have written about how sex robots turn women into what they call programmable property.


But these kinds of approaches, I think, risk taking technologies as having an essence, a single function, causing a problem in and of themselves, rather than thinking about technologies in the context of their use, so technologies as relational. And crucially, like other manifestations of carceral feminism, the proposed ban on sex robots tethers protectable womanhood to racializing discourse. So first, and this is more implicit in the campaign, there's a kind of universalization of the category of women, and the fear of women writ large being treated as property appropriates for unmarked or white womanhood the sexual and racial history of chattel slavery in the Americas, colonisation, imperialism. And then second, the campaign talks about patriarchal cultures breeding pathologized desires. And so it's really interesting to me how that campaign singles out Japan and China and Asian countries as the origin for sex robots. So yeah, I think it's not very productive to think about technologies as either inherently good or bad, or ethical or unethical. Rather, we can think about the kinds of relations that technologies encourage, enable, and facilitate, and then the kinds of relations that they might prevent or preclude or make difficult.


KERRY MACKERETH:

That's hugely fascinating, and it flows really nicely into the next question I wanted to ask you, which is around what kinds of harm you think result from the ways that we think about technology, and specifically the kinds of relations that we have with technological objects?


NEDA ATANASOSKI:

Yes, so I've actually been contending with this question of harm and technology and technological relations, as all of us are, in the midst of a global pandemic, thinking about how harm vis-à-vis technological relations has played out in this moment. For some parts of the population, everything has become contactless, virtual, and then for others, right, they are actually enabling that virtual reality for those who are privileged enough to remain at home. When I lived in the Bay Area, I would drive across the Bay Bridge connecting Oakland and San Francisco. And there was a billboard that was so fascinating to me for Grubhub, which is an online food ordering platform and an app that provides delivery services that many of us are so familiar with now. This billboard had a little robot on it, and it said: because self-delivering food is still in beta, use Grubhub. So the company was investing in research on drone delivery and self-driving cars, but actually Grubhub relies on this pool of low-paid, available gig workers. And so we can think about how the laboratory for technological futurity based in Silicon Valley startups paved the way for the technologised aspects of the pandemic. But rather than technology freeing the privileged and wealthy for more creative tasks, which is what it was sold as before, the techno-revolution now presents this moment where it's delivering reprieve from risk. So in some ways, technology allows a risk-free life to become a luxury good. In certain Whole Foods stores in the US, there are now separate lines for Whole Foods shoppers, and it's really very visible that self-driving cars are not the ones delivering the food, and it's not robots picking from the shelves, right? It's people who are precarious and who have to expose themselves to risk. And at the same time, this virtual world has just expanded the scope of work for another kind of worker, right? Zoom and other technological platforms have made us permanently available for work. And so really, it's sort of the dream of techno-capitalism, right, that every moment of our lives can now be accounted for and made productive. So those are some of the harms I've been reflecting on in this pandemic moment, hopefully soon to be a post-pandemic moment.


ELEANOR DRAGE:

Yes, we very much hope so. Another one of the harms that we're thinking about is to do with technology’s taxonomical capabilities, so the way that it sorts and categorises, and how this plays into what you call the logics of differentiation. So we want to ask you about that, and about why, specifically, you think people fail to recognise this?


NEDA ATANASOSKI:

Yes, so in Surrogate Humanity, Professor Vora and I defined technoliberalism as how difference is organised through technological management and use of the categories of race, gender, and sexuality, and a fantasy of a technologically enabled postracial future. But recently, I do think there have been more and more news stories and documentaries and texts written about what some people have called racial bias within technology. So I do think that is entering a more mainstream sort of discourse about technology. The most often talked about example of this is facial recognition software, and how it's much less accurate in identifying black and brown faces. Of course, the norm on which such software was trained is white male faces, right. So some people call this racial bias in AI. And I think it's interesting: if the problem is bias, then it would imply that fixing or correcting the bias would be the solution, right. But in the instance of this well-known example, fixing the bias would mean that AI can more accurately recognise black and brown faces. And given the history of uneven policing and other kinds of surveillance of black and brown bodies, is this capability really a desired outcome? Correcting the technology or the algorithm, so to speak, might actually just make the products better at policing, at surveilling.


So also, if technologies aren't biased, are they neutral? As I said before, I think characterising technologies as good or bad, but also as neutral or biased, reproduces some of the logics of technoliberalism and its seeming goal of bettering technology for an ostensibly postracial future. So again, I think it's important to think about the kinds of relations that technologies enable and encourage, and then also to think about how we talk about technology in relation to various kinds of racial and gender difference, whatever we want to call that.


KERRY MACKERETH:

That's so fascinating. Why is it so hard to fix the kinds of harms that you've identified, but also why do these particular racialized and gendered ways of thinking about technology become so entrenched?


NEDA ATANASOSKI:

I think this is a question about what is valued within capitalism, within racial capitalism: efficiency, productivity, but also convenience, speed, easy access to goods and services. My colleague from Georgia Tech, Professor Nassim Parvin, and I have been researching home tech, or technologies that are used in smart homes. One service we're looking at is called Cherry Home. This is a home security system that uses a separate processor and advanced AI to see what's happening inside a house, map the room that it's in, recognise the people in the room, tell the user what the people are doing, and then send notifications if something seems off or is wrong. So for instance, the system can alert the user if someone falls or cries. It's clearly intended for caretakers to monitor young children or older people. So on the one hand, this service has been called creepy, right, it's taking surveillance to a whole new level. But on the other hand, this speaks to a lack of available or affordable caretakers, right: who can afford a certain level of care, and who needs to rely on technology to look after loved ones? Although even that is a level of privilege, because the system is expensive. These technologies step into the place of infrastructures that are no longer there, at least in the US context, where social services have been, you know, completely gutted. So I think the technologies are there, and they're marketed to help, right, but that precludes a conversation about social and health and financial safety nets and infrastructures.


KERRY MACKERETH:

Absolutely, and that example of Cherry Home, I think, is so generative, because like you say, on the one hand it poses so many tangible risks to people, in terms of not just surveillance but also the use of these kinds of technologies for intimate partner violence and domestic abuse. But on the other hand, like you say, it illuminates this same problem of labour that your work, I think, really excellently foregrounds. And that's what I wanted to ask you about next: this relationship between technology and labour. So how do new and emerging technologies both illuminate but also reproduce these gendered and these racialized histories of work?


NEDA ATANASOSKI:

Yeah, this question is really at the heart of what was driving the research for Surrogate Humanity, because so much of technological design and programming is based on assumptions about what will make our work easier, what will eliminate the kind of work that is not considered to be innovative, creative or rewarding. I mentioned the example of self-driving cars earlier, but there are so many other examples that show the kind of work that is imagined to be fully replaceable by technology. And this is work that has historically been performed by the enslaved, servants, immigrant populations, wives who didn't themselves have servants, and so on. There was a quiz that the BBC put out some years back, in 2015, called ‘Will a Robot Take Your Job?’ There are many jobs on the list, but I wasn't surprised when I looked up ‘university professor’ and discovered that I was very unlikely to be replaced by a robot. But of course, jobs like administrative assistant or bus driver were categorised as highly likely to be replaced. And this imaginary of the replaceability of racialized and gendered workers is a lot older than we think. It very much has to do with a history of wanting to secure a cheap labour force that doesn't rebel against the master or the boss. A book that I love is Curtis Marez's Farm Worker Futurism, which talks about this, and he writes about how in the early 20th century, US agribusiness fairs always celebrated the technology that they imagined would replace Mexican immigrant farmworkers. But the book shows that rather than replacing these workers, automation furthered the need for deskilled workers who would work alongside machinery. And also, of course, there are some kinds of work, manual labour, that cannot be automated; certain kinds of picking cannot be fully automated. And we can also think about personal assistants like Siri and Alexa, and how these inherit an imaginary of an ideal assistant based on gendered labour histories, and women who were in secretarial and stenographic pools. So it's not surprising that there's a sort of feminine, deferential voice that we associate with that kind of personal assistant. So these are long histories that are shaping not just how technology is imagined, but what the design is intended to do.


ELEANOR DRAGE:

One of the things that we really love about your work is that it draws on much-loved Black feminist scholars like Hortense Spillers, who mean a lot to us and what we do, because the premise of The Good Robot is that feminist and critical race theory, among other critical areas of scholarship, can help make AI less harmful. So we want to ask you specifically, because everyone has their own ideas about this: what does feminist and critical race scholarship mean to you? And how does it inform your work?


NEDA ATANASOSKI:

Yeah, so my training is in critical race and feminist studies, rather than in science and technology studies. And so my starting point for thinking about science and technology is that they have always been deeply connected with histories of racial and gendered hierarchy and oppression. There are many ways to think about how critical race and feminist theory intersect with studies of technology, ranging from how technology targets racialized populations (think of studies of predictive policing, studies of how technology targets migrants at borders, or, as I mentioned, facial recognition software) to studies of how digital platforms can enable activism. But my interest in this question has always been how ideas about technological progress reproduce racialized notions of who is fully human. Those of us from a tradition of working in critical race theory know very well that humanity is a sliding scale: people can become fully human, but humanity can also be taken away. And I think that some of what I was talking about earlier, in relation to imaginaries of labour and who is replaceable, speaks to that question, right. But we can also look at drones. Drones are technologies that, in some instances, enable us not to talk about human beings being killed, but rather to think about certain racialized populations as targets or collateral damage. So that, to me, is really one of the most interesting questions about technology, because technology is decidedly not human. Another way to approach that question is to think about the conversations around when technologies will need rights. What will make technologies have consciousness, or force us to give them rights? So that's a separate kind of question.


ELEANOR DRAGE:

That's really interesting, and it has us thinking about the human in technology and how human technology is. I come from a school of thought where I see humanity and technology as co-constituted, but it's really interesting to see how people develop that idea. Another thing that we really like about Surrogate Humanity, your book co-authored with Kalindi Vora, is that you offer this provocation that there is no such thing as feminist AI. So, what does this mean for us? Do you believe that feminist AI is impossible? And what do we do as a result? How can we move forward from here?


NEDA ATANASOSKI:

So I think when we wrote that epilogue, there was a sort of conversation that Kalindi and I were having back and forth about the desire for a feminist AI. And I think, to me, the question is: what does it mean for something to be feminist? And then, what is intelligence? Those two questions are really at the heart of what you're asking. There are many different approaches to what a feminist version of AI or technology would be. Is a feminist AI one that is trained not to make the sexist and racist remarks it has been taught by users and trolls? Is it just to improve that? Is a feminist AI one that is programmed by a feminist programmer? Or can there be a feminist AI in a context where programming is operating within a certain set of capitalist relations? So that's one question that I have. The other question is how intelligence is so connected to this post-Enlightenment figure of the fully human. As we wrote in our epilogue, intelligence has been considered something that is the purview of, sort of, white educated elite men, right. So initially it was playing chess that was considered to be the height of what an AI could do. So what would it mean to rethink intelligence? What would it mean if we thought about intelligence as enacting another kind of relationality? You know, because I agree with you, technology and the human are co-constituted, but these co-constitutions can have very different ends, right? And so do they reiterate and rehearse what is valued, in this instance, as intelligence? Or do they enact a completely different kind of relation that can disrupt capitalist relations? Too much of what I've seen as a feminist approach to technology is just corrective, and I think that doesn't go far enough. It was meant as a provocation, and so it was in a declarative mode. But I think it's an important provocation.


KERRY MACKERETH:

Absolutely. I mean, I personally love this sort of, you know, very bold provocation that you end your book with. Can we just say thank you so much, again, for appearing on our podcast. It's really been delightful and sparked all kinds of thoughts and conversations that I want to follow up on. So thank you so much.


NEDA ATANASOSKI:

Thank you for having me. I really enjoyed your questions, and I really enjoyed speaking with you. What a wonderful podcast.

