
Kate Chandler on Drone Warfare and Undoing Everyday Militarism

Dr. Katherine Chandler's research examines the intersection of technology, media and politics across a range of scales and forms. Her first monograph, Unmanning: How Humans, Machines and Media Perform Drone Warfare, studies unmanned aircraft from 1936 to 1992. She asks how life and death are adjudicated through conditions organized as if control were “unmanned.” Her most recent work studies how socio-politics are entangled with everyday media and technologies, including PowerPoint, email and drone aircraft deployed for commercial, humanitarian and medical purposes. She received her Ph.D. from the Department of Rhetoric at the University of California, Berkeley, with a Designated Emphasis in New Media. Her work has been published in Interventions: International Journal of Postcolonial Studies; Humanity: International Journal of Human Rights, Humanitarianism and Development; Catalyst: Feminism, Theory, Technoscience; and Qui Parle: Critical Humanities and Social Sciences. Her second project, “Drone Publics,” is funded through Georgetown University’s competitive pilot grant program.


In this episode, we talk to Dr. Kate Chandler, Assistant Professor at Georgetown and a specialist on drone warfare. We recorded this interview the day that Russia invaded Ukraine, which reminded us of just how urgent a task it is to rethink the relationship between tech innovation and warfare. As Kate explains, drones are more than just tools: they are also intimately tied to political, economic and social systems. In this episode we discuss the historical development of drones - a history which is both commercial and military - and then explore a better future for these kinds of technologies, one where AI innovation money comes from nonviolent sources and AI can be used for the prevention of violence.


READING LIST


Adam A (1998) Artificial Knowing: Gender and the Thinking Machine, New York: Routledge.

Bates D (2013) "Cartesian Robotics," Representations 124.

Kaplan C and Parks L (eds) (2017) Life in the Age of Drone Warfare, Durham: Duke University Press.

Chandler K (2020) Unmanning: How Humans, Machines and Media Perform Drone Warfare, New Brunswick: Rutgers University Press.


Daston L (1994) "Enlightenment Calculations," Critical Inquiry 21(1).


Daston L (2017) "Calculation and the Division of Labor," 31st Annual Lecture of the German Historical Institute.


Haraway D (1991) Simians, Cyborgs, and Women: The Reinvention of Nature, New York: Routledge.


TRANSCRIPT:


KERRY MACKERETH:

Hi! We're Eleanor and Kerry. We're the hosts of The Good Robot podcast. Join us as we ask the experts: what is good technology? Is it even possible? And what does feminism have to bring to this conversation? If you wanna learn more about today's topic, head over to our website, where we've got a full transcript of the episode and a specially curated reading list with work by, or picked by, our experts. But until then, sit back, relax, and enjoy the episode.


ELEANOR DRAGE:

Today we're talking to Dr. Kate Chandler, Assistant Professor at Georgetown and a specialist on drone warfare. We recorded this interview the day that Russia invaded Ukraine, which reminded us of just how urgent a task it is to rethink the relationship between tech innovation and warfare. As Kate explains, drones are more than just tools: they are also intimately tied to political, economic and social systems. In this episode we discuss the historical development of drones - a history which is both commercial and military - and then explore a better future for these kinds of technologies, one where AI innovation money comes from nonviolent sources and AI can be used for the prevention of violence.


KERRY MACKERETH:

So thank you so much for joining us here today. It's really such an honour to speak with you. So first, just to kick us off, could you tell us a bit about who you are, what you do, and what's brought you to the study of gender, feminism and technology?


KATE CHANDLER:

So hi, I want to thank both Kerry and Eleanor for inviting me to be on The Good Robot; it's really exciting to have the chance to discuss my research with you on this podcast. So my name is Kate Chandler. I am an Assistant Professor in the Culture and Politics programme at the School of Foreign Service at Georgetown University. My work addresses feminism, media, and technology in a transnational context, particularly how it relates to and transforms international relations on all kinds of different scales, ranging from the local to the international. But I got here by way of many different interdisciplinary fields, which include an undergraduate degree in sociology, a degree in photography and cultural analysis, and my PhD in Rhetoric from the University of California, Berkeley. And in these various different projects I have long been interested in science and technology studies and feminism, and particularly the ways in which technology and the making of technology are a site for understanding the political in the broadest sense of the term, which of course includes subjectivities - not just gender, but also race, sexuality, ability, and age. So our being political is something that is deeply interconnected with technological systems. That is a question that has long interested me, as well as the ways in which it is understood differently across the globe, and how resources and understandings of the technological are not distributed equally.


ELEANOR DRAGE:

We think, then, that you're in an excellent position to respond to our billion-dollar questions. So what is good technology? Is it even possible? And how can feminism help us work towards it?


KATE CHANDLER:

I love this question. And it's something that I have long grappled with and thought about. I think my simple answer is that the adjectives good and bad should not be applied to technology. So when we are thinking about the ethical stakes of technologies, I want to expand what a technology is, to connect it to social, political, economic and cultural relationships. And this, again, is a longstanding idea that lots of feminist scholars have talked about - one of the key scholars for me is Donna Haraway and her idea of the cyborg. So I think if we're just trying to think about a technology as being good, it's never going to work out for us, because we're only thinking about the technology; we're not thinking about the forms of relationality and the connections that it's creating. And so I think there's a new understanding of technology that feminism really helps us to get, through its understanding of relations and thinking about relations as part of technological systems. So as opposed to technology being a tool, or something that's separate from human relations, or separate from politics or separate from economics, instead it's a site for all of these things to happen. And I think if we are going to use technology in beneficial ways, one of the advantages that may be proposed by technology is that we may be able to better look at the ways in which existing inequalities, existing global problems, existing local problems, existing relationships become embedded in technological relationships. And that may give us a better critical lens to think about how they might be dissected and transformed, and how we might create relations that are more equitable, more just, that allow us to be the forms of subjects that we want to be, to better understand our relationships with others. And I think that comes not from a technology; I think that comes from a lens and a perspective that we bring to the technology, and it comes through careful consideration of what kinds of relationships are being embedded in the technology and how we can transform them.


KERRY MACKERETH:

Fantastic, thank you. That's such a thoughtful and interesting answer, and it certainly resonates with what a lot of other feminist scholars on this podcast have thought about, from N. Katherine Hayles's work on nonhuman cognizers through to Jason Lewis's work on Indigenous approaches to AI, thinking through these different forms of relationality that are so central to this question of, as you say, good technology. I want to pivot now to think about some of your other work, which focuses on the history of drones. For context, Eleanor and I work with a major tech multinational to think about feminist approaches to AI ethics and practice. One of the challenges we've experienced is the massive hype around AI: AI is such a big corporate and industry buzzword right now that this can make it really hard to implement AI well. And one of the specific kinds of public preconceptions about AI is that it's a really new, unprecedented form of technology. So could you share a bit more about your own research on the history of drone warfare, and how, in this context, the framing of AI as this new phenomenon is potentially quite misleading?


KATE CHANDLER:

That's a great question. And I'm really excited to talk about my own research, but I want to also gesture to the ways in which many feminist historians of science and technology have talked about these questions. I'm thinking specifically of the work of Lorraine Daston, who does an amazing job of situating the entire study of computation in questions of gender politics, and I think her analysis of computation is really useful for our own thinking about artificial intelligence, reminding us that this idea of the calculator, and a system of calculation, is not something that emerged in the 20th century; it emerged in the 19th century. And thinking about it in the context of the 19th century ties it into various different democratic, social and economic movements, namely the rise of the nation state. My own work was inspired by the work of David Bates, another intellectual historian, who has done a lot of work trying to take the figure of the robot and put it back into early Cartesian thinking. So one of the things that Bates describes in his research is the way in which Descartes, in thinking about who is human, uses this figure of the automaton. And I'm pointing this out because we have all of these historical antecedents of ideas of automation and machines, which of course go back long before even the modern period. We can find remnants of these in various different classical texts, if we think about the Western tradition, and scholars working in other parts of the globe have also connected the idea of automation to various different lineages. So I think there's a lot of material here to remind us that the idea of a thinking nonhuman is something that is deeply embedded in lots of different intellectual traditions and something that has been thought about and proposed for a really long time. And of course, it carries with it all kinds of really problematic ideas. The historical examples are often the best places to go look at this, because there we can see our own racism and sexism, ableism, ageism and heterosexism in really blatant ways, whereas when they're a part of our contemporary discourse, sometimes it's harder for all of us to see them. If you look back and just think about the word robot and where it emerges from - the Czech word for slave - we see really clearly how ideas of inequality, and concerns about unequal relationships between humans, are built into ideas and notions of automation. And in my own work on drones, I'm really interested in ideas of automation associated with war, and the problems involved in imagining nonhuman wars. As I mentioned earlier, we're recording this on February 24th, and this morning we all woke up to news of the attack by Russia against Ukraine. And this notion of war and technologies of war, particularly in the 20th century, has been something that is imagined as a nonhuman process or something that can be automated. We're all concerned about these systems of automation, not just associated with artificial intelligence, but of course with nuclear weapons as well. And so my research looks at the early development of drone technologies starting in the 1930s. Again, we tend to think about drones as a contemporary technology that emerges in the war on terror.
But the word itself was the code name for an automated, pilotless, remote-controlled aircraft developed by the Navy. And this system is associated with early guided missile systems, as well as ideas about autonomous weapon systems, which were also a part of early artificial intelligence in the 1960s. And so there's a lot of really amazing research here. I'm thinking of Paul Edwards' The Closed World, which describes the involvement of 1960s artificial intelligence researchers in the US military-industrial complex and the development of the SAGE air defence system, which was supposed to be an “automated response” to nuclear war should it happen. And of course, none of these systems are ever automated, right? And I point this out because it's easy for us to see now that the hype of SAGE as a totally automated system was something that was orchestrated by human relations, political relations, the Cold War; there was no nonhuman system that was separate from the political, economic and social systems that were making it possible. And I think it's really important that we recognise that with artificial intelligence today: there is no artificial intelligence that is out there thinking and doing things yet, without us acting on it, organising it, paying for it, creating rationales for it, deploying it in various locations, training it in various different ways. All of these things are deeply embedded in sociopolitical and economic structures. This does not mean that individuals are controlling it, because there are all kinds of social, political and economic relationships that then shape it in various different ways, in ways that human actors sometimes don't feel like they have control over. I mean, I think it's really interesting to talk to engineers who are charged with making and developing these systems about the limitations that they feel. So I think it's really important to recognise that these are embedded in much broader structures, which include politics, but also cultural relations that we are all part of. And I really want people to think about the ways in which we can transform how we talk about drones and AI, for example, if we just start acknowledging that drones and AI are not agents; they are only agential to the extent that they are connected to humans. Again, they are relational systems, and we need to think about them in these relational forms. Just going briefly back to one of the key things in my research: this drone system is very purposefully described as an insect. And I think it's useful, again, in feminist scholarship to think about the ways in which gender binaries, racial binaries and other binaries fit with this human/nonhuman discourse. But what's interesting to me about the drone is that it was also specifically imagined as an insect, right? So we think of human, machine and animal as being separate entities, but they're actually all being imagined as particular kinds of entities that are motivated in various different ways. And I think this cybernetic question of what it is that's driving a system is really, really important to consider, and again it shows how a lot of politics is playing out here. So the drone - the name was chosen because it was supposed to be an impotent, easily controlled system.
Right, so when drone was invented, the name was used to designate not some autonomous threat, but rather something that could be easily controlled, which is also a deeply gendered notion of how we think about technology. I'm sure many other people on your podcast have talked about the ways in which, when technology is described in gendered terms, if it is feminised, that is often used to indicate that the technology can be controlled. And so this term drone, which now means something really different to us - it gives us this idea of inhuman threat, of dehumanisation, all of these other senses particularly associated with targeted killing and the US war on terror - in its roots was imagined as a system that could be controlled, because people were really worried about autonomous systems going out of control. And I think this tension between what is in control and what is out of control is something really important to consider, especially again in the field of engineering, where so much effort goes into treating technology as a tool to be controlled. And again, what is useful about the relational idea here is that we can see how our ideas, or our notions of what we want to have happen with a technology, are limited, right? We have to think about the ways in which the technology may also always be out of control. And that's something that I'm really concerned with in contemporary AI systems as well.


KERRY MACKERETH:

That's absolutely fascinating. It's such a rich answer, and it's really exciting to hear you talk about this, because in many ways I feel like my own research interests approach a very similar question - these fringes of the human and the nonhuman and how that's envisioned in technology - but I'm thinking almost from the opposite side, which is thinking about how this idea of the automaton has been used to racialize particularly members of the Asian diaspora. I work in Asian American and Asian Diaspora Studies, and we're also having on the podcast soon Michelle Huang, who's done this amazing art piece, Inhuman Figures, on this exact topic. And also for our listeners, if you want to check out our previous episodes, we have one with Anne Anlin Cheng, who also looks at the idea of ornamentalism, or how Asiatic femininity is produced in relation to machinery and the mechanical. And I wonder, to move to another area of overlap in our work: some of my work on Asiatic racialization thinks through the geopolitics of the so-called AI arms race. One of the interesting things about this problematic framing of AI development as an arms race is that, unlike other forms of technology - nuclear weapons, for example - AI doesn't really work when defined as a weapon in the conventional sense, since the military and commercial applications of AI often involve the exact same technologies and processes, and they're not easily distinguishable from one another. And I know this is a really big focus of your work. So could you talk to us a bit about how different technologies, from drones through to PowerPoint, are deployed across multiple domains, from military to humanitarian arenas, and how, in doing so, they contribute to the militarization of everyday life?


KATE CHANDLER:

So let me just say, I think Kerry's work is absolutely wonderful and amazing, and this is a really exciting research area. I also want to underscore the significance of intersectional feminism here, and that the ways in which we talk about gender binaries, or human/nonhuman binaries, always contain elements of race as well, and of course also ability and sexuality. It's really important to have a rich, diverse range of researchers in the room who are studying these various different objects, and I think there's so much that can be taken out of that. As scholars, there's often this push to study what is new, and I wish we would spend more time studying what has already been studied, because I think there's a lot to unpack about every single object that we're thinking about. I just want to briefly share two anecdotes from my previous research that may help us think about this discussion of the contemporary AI arms race as it's being framed, and the limitations of understanding this as an arms race, which I think is a really important framing. The first one, which ties into this question of racialization as well, is the role of television in the development of early drone aircraft. So the first drones, as I mentioned, were understood as insect-like systems that were supposed to be controlled. And engineers quickly began to think about these insect-like systems as weapons that could be used to attack, particularly in the build-up to World War Two. These aircraft were explicitly racialized: the missiles that were developed from the drone were understood as “American kamikazes”. And so here we see how the racialized Asian figure is used by the American military to describe the weapon that it has developed to attack Japan. To turn the drone, which was basically a remote-controlled aircraft, into a flying torpedo, the idea was to integrate early television into it, and the engineer who did this was Vladimir Zworykin, whom those of you who are familiar with media history will know as the inventor of RCA television. And there was a huge contract between the Navy and RCA to develop television systems for all of these drones. Now, this drone aircraft and the missile project associated with it were considered a colossal failure at the end of World War Two, and the project was completely cancelled. Vannevar Bush, who many of you may know as the sort of czar of scientific research for the United States, instrumental in the Manhattan Project and other atomic developments, really dismissed the idea of drones as a ridiculous concept. So what happened is that the technology essentially provided the groundwork for the development of domestic television sets. Those of you who are familiar with media history will know that the 1950s is the golden age of television; the piece that made domestic television possible was ultimately developed for the drone that was known as the American kamikaze. So this racialization was already there in our American television sets and in the history of television. And I think this should really cause us to pause. So I agree absolutely, Kerry: I think the ways in which this is happening with AI are unprecedented, in particular because the innovation I've described historically started in the military and then moved into commercial systems.
And what I see happening with AI is actually that the developments are happening in commercial systems, and then they're being adopted by the military. So I think there is a real transformation. But this interconnection, this interplay between what becomes totally ordinary everyday life, i.e. watching television, and what is a part of the military-industrial complex, is really important to parse out. And I think, again, this suggests to us more potential avenues for change and transformation. So I don't want you to read me as saying, oh, the military has made all the innovations of the 20th century possible. I don't believe that. But I think we need to pay better attention to things that are happening in the military sphere, and to the interpenetration between the military sphere and the commercial sphere, and the ways in which that continues to reproduce inequities, which extend all the way from wartime violences to the understandings of different subjectivities as unequal and within various different hierarchies. And so I really want us to do that work that connects these granular, everyday scales all the way up to geopolitical scales. And, you know, one of the ways I've also been thinking about this is by analysing a lot of the contemporary work on drone warfare in the United States. One of the things I noticed is that much of this material is presented to us in the very boring PowerPoint format. And one provocation that I have for my students, my colleagues and the people that I work with is this: it's very easy to say that these dehumanising hunting optics of drone warfare are making it much easier to kill people at a distance and are transforming the stakes of warfare. But what if we say the same thing about PowerPoint? What if we think about the ways in which our very ordinary ways of interacting through screens and media also produce distance, also abstract our relationships to others, also reproduce systems of bureaucratic power and rewrite the existing logics of how geopolitics happens? So that's one of the ways I've been thinking about this.


ELEANOR DRAGE:

Thank you, that's fantastic. There really is a nice and very important connection to be made here between military tech development, which is what you're talking about, and how the ideas that underlie military technology - the racialization of military technology - then feed into commercial systems like the television. And media histories are fascinating, so thank you very much for bringing that up. You have looked into the importance of tracking the funding, you know, where funding for AI is coming from, and it's something that Kerry and I consider a lot. We're lucky to be funded pretty innocuously, by people we think are doing good work in other domains, and we take funding very seriously. I've always been fascinated by the fact that even in 1987, MIT pulled in over $407 million in defence contracts. I mean, it's just astonishing. And this continues today. I know you've said that not all AI has been funded by the military, obviously, but can you just tell us, in two minutes or so, why it is so important to trace the money?


KATE CHANDLER:

So I think it's really important to trace the money to understand the power that is being enacted through technological systems, and money as a form of relationality, right? We've been talking about feminist forms of relationality; we can also talk about economic forms of relationality, which are necessarily based on exploitation. And so money is fundamentally a value that says what we need to do is make profit. And we can see that war has been a major source of profit throughout the 20th century for very particular actors, who are mostly white men. I think we really need to call this out and think about the ways in which the billions of dollars in defence contracts associated with artificial intelligence are not going to produce a neutral artificial intelligence; they are going to produce an artificial intelligence that is designed for war. If we want an artificial intelligence that does different things, it needs to come from different funding sources, right? Or maybe it needs to not be funded at all. Maybe it needs to really draw on some of the earlier ideas of the internet, which were not based upon developing the internet as a tool for the expansion of capital. So, a few small numbers: one of the continuities between the Trump and Biden administrations is a massive new investment in artificial intelligence by the Department of Defence. It's estimated that the proposed budget for military artificial intelligence this year in the United States is $5 billion, and much of that money will go both to commercial companies and to academic institutions. And I think we, as academics at academic institutions, can raise questions about how this money is being used and about the sources of money that we're utilising in order to continue to promote education - and, again, recognise that these have values associated with them. We might challenge the values that they contain and think about what other values we might want to promote.

ELEANOR DRAGE:

Absolutely, which is what you're doing by looking at how feminist ways of imagining the future of certain technologies can challenge power imbalances in AI. So we heard you read a wonderful fictional story offering a different kind of feminist imaginary of drones. So can you tell us more about that?


KATE CHANDLER:

So I really think that bringing feminism to the forefront of thinking about military technologies is a really useful exercise, again drawing on some of the longstanding traditions of gender, peace and security, and the ways in which feminist advocates have sought to rethink war. And I do want to quickly note that these feminist advocates have historically been criticised for prioritising a particularly Euro-American perspective, so I think resituating these narratives within a global context, and including and foregrounding perspectives from the Global South, is really important. But just to think about this idea: what if the basis of AI is not an arms race, right? What if AI is developed to prevent violence from happening? What if AI is used to advocate on behalf of preventing people from being targeted? I think there's an easy switch of the current optic, which is trying to turn subjects into targets. We could say, what if it simply did the reverse, right, and said these should not be targets? What's the anti-target? That, to me, seems to promote these forms of collectivity and engagement, and thinking about connections between places as opposed to competitions between places, and undoing the narrative that technology is a race at all, right? There would be a different set of language that we use to describe innovation - which maybe, again, wouldn't even use the word innovation - that would be much slower, that would be engaged with the forms of connection that we want to have and really work to think about. Eleanor mentioned that she worked on utopian ideas - trying to think about the utopian potentials, even as we know that there are always challenges associated with these utopian ideas. I think providing different imaginaries for thinking about what the technology is, is really important, especially in the context of geopolitics and the major tech industries. There are a lot of different imaginaries that come from all kinds of different people, who have a range of positions and perspectives that are just not normally included in these conversations. And I think so much can be done by expanding the conversation to incorporate more people, and really trying to upend how we fund this and how we think about the future direction of AI, in a way that really imagines a different future, one that we would all want to live in, not a sort of apocalyptic machine-led world.


KERRY MACKERETH:

Fantastic, thank you so much. The time has really whizzed by because it's been such a fascinating conversation. So thank you so much again for joining us. It really is a privilege to hear about the amazing and wide-ranging work you've been doing, and we hope to chat to you again soon.

KATE CHANDLER:

Thank you so much.


ELEANOR DRAGE:

This episode was made possible thanks to our generous funder, Christina Gaw. It was written and produced by Dr Eleanor Drage and Dr Kerry Mackereth, and edited by Laura Samulionyte.




