
Jason Edward Lewis on Indigenous Work in AI

Updated: Feb 20, 2023

In this episode we chat to Professor Jason Edward Lewis, the University Research Chair in Computational Media and the Indigenous Future Imaginary at Concordia University in Montreal. Jason is Cherokee, Hawaiian and Samoan and an expert in indigenous design in AI. He’s the founder of Obx Labs for Experimental Media and the co-director of a number of research groups such as Aboriginal Territories in Cyberspace, Skins Workshops on Aboriginal Storytelling and Video Game Design, and the Initiative for Indigenous Futures. In this episode we discuss how indigenous communities think about what it means for humans and AI to co-exist, why we need to rethink what it means to be an intelligent machine, and why mainstream Western modes of building technology might actually land us with Skynet.


Jason Edward Lewis is a digital media theorist, poet, and software designer. He founded Obx Laboratory for Experimental Media, where he conducts research/creation projects exploring computation as a creative and cultural material. Lewis is deeply committed to developing intriguing new forms of expression by working on conceptual, critical, creative and technical levels simultaneously. He is the University Research Chair in Computational Media and the Indigenous Future Imaginary as well as Professor of Computation Arts at Concordia University. Lewis was born and raised in northern California, and currently lives in Montreal. He directs the Initiative for Indigenous Futures, and co-directs the Indigenous Futures Research Centre, the Indigenous Protocol and AI Workshops, the Aboriginal Territories in Cyberspace research network, and the Skins Workshops on Aboriginal Storytelling and Video Game Design.


Reading List:


Lewis, J. E. "From Impoverished Intelligence to Abundant Intelligences", Medium. https://jasonedwardlewis.medium.com/from-impoverished-intelligence-to-abundant-intelligences-90559f718e7f


Lewis, J. E.; Arista, N.; Pechawis, A.; Kite, S. "Making Kin with the Machines". https://jods.mitpress.mit.edu/pub/lewis-arista-pechawis-kite/release/1


Lewis, J. E. "A Brief (Media) History of the Indigenous Future", Intellect, Volume 27, Number 54 (2016), pp. 36-50.


Lewis, J. E. and Skawennati, "The Future Is Indigenous", Leonardo, Volume 51, Number 4 (2018).


For a full list of Jason's publications, see: https://jasonlewis.org/category/publication/


Transcript


KERRY MACKERETH:

Hi! We're Eleanor and Kerry. We're the hosts of The Good Robot podcast, and join us as we ask the experts: what is good technology? Is it even possible? And what does feminism have to bring to this conversation? If you wanna learn more about today's topic, head over to our website, where we've got a full transcript of the episode and a specially curated reading list with work by, or picked by, our experts. But until then, sit back, relax, and enjoy the episode.


ELEANOR DRAGE:

Today we’re talking to Professor Jason Lewis, the University Research Chair in Computational Media and the Indigenous Future Imaginary at Concordia University in Montreal. Jason is Cherokee, Hawaiian and Samoan and an expert in indigenous design in AI. He’s the founder of Obx Labs for Experimental Media and the co-director of a number of research groups such as Aboriginal Territories in Cyberspace, Skins Workshops on Aboriginal Storytelling and Video Game Design, and the Initiative for Indigenous Futures. In this episode we discuss how indigenous communities think about what it means for humans and AI to co-exist, why we need to rethink what it means to be an intelligent machine, and why mainstream Western modes of building technology might actually land us with Skynet. We hope you enjoy the show.


KERRY MACKERETH:

Thank you so much for being with us today. Could you introduce yourself and tell us a bit about what you do, and what brings you to the topic of indigenous epistemologies and technology?


JASON EDWARD LEWIS:

I'm Jason Edward Lewis, I am a Professor of Design and Computation Arts at Concordia University in Montreal. And I, in general, look at the intersection of sort of computational media and how it affects the ways in which we speak to each other and the ways in which we interact with each other. I came to this topic of indigenous epistemology and artificial intelligence, really, through, you know, probably 15 years of specifically looking at how indigenous people were using digital technologies to tell our stories, and then sort of in a more general sense, and on a longer timeframe, thinking about how these communication technologies were affecting the ways in which we engage with each other, and the ways in which we understand one another and also understand the world around us. So thinking a lot about how the medium affects the message.


ELEANOR DRAGE:

Fantastic. So we are The Good Robot, that's our provocation. And we ask everyone, what is good technology? What does it look like? And we're thinking about, good for whom? You know, so how are you thinking about good technology, or what good technology might be and how we should work towards it?


JASON EDWARD LEWIS:

So these days, I really see good technology as technology that is responsive to local conditions. We have benefited mightily from technologies designed to scale over tens of thousands, and millions and hundreds of millions of people, without a doubt. But as we've scaled technology like that, we've overwritten a lot of the particular things that are important to local communities, and we can define local communities in lots of different ways. So when I talk about local communities, I'm really talking about indigenous communities. And sometimes those indigenous communities are on their territory, so it's really easy to see where the idea of their community and where they're at come together. But sometimes there are indigenous communities that are, say, in an urban setting, or they're international indigenous communities. So it's me as a, as a Hawaiian, a kānaka maoli person in Montreal, talking to, you know, somebody who is Cree, from Winnipeg, talking to somebody who is Māori, you know, in New Zealand, right, but we do have community, right, there's a commonality of sort of interests and values and things that we're fighting for, so that we can think about what that means on a community level. So in order to develop good technology, it's really about how do you pay attention to what the people who are using it really want to do, and getting away from assumptions about universal users. So figuring out how to take all these wonderful tools that we developed, that, you know, often get a lot of their power from abstraction and scale, and thinking about how we can counterbalance that with actually paying attention to specific circumstances where the culture and the community that you're part of play a central role.


KERRY MACKERETH:

Your answer kind of brought up the ideas that you touched on in a piece that you co-authored, which is fantastic, highly recommend to all of our listeners, a piece called "Making Kin with the Machines", which thinks about the ways in which indigenous epistemologies are so much better at these kinds of relationality between humans and nonhumans. So I was wondering, for our listeners, would you mind telling them about your work and this piece and why you think indigenous epistemologies are so important in this space?


JASON EDWARD LEWIS:

Absolutely, it's ... I find it super exciting. So “Making Kin with the Machines” is an essay that I co-authored with Noelani Arista, who was a professor of history at the University of Hawaii at Manoa at the time, and she's a Hawaiian language and archives specialist; Archer Pechawis, who is a Cree performance artist based out of Toronto, who has been thinking and making around the concepts of what it means to indigenize digital technology since the early 90s; and then Suzanne Kite, who is a PhD student with whom I work, who is Lakota. And she's really interested in how Lakota ways of being and ways of knowing can be expressed in the design of the instruments that she makes. She makes her own instruments for performance, and then also in the performances themselves. So we got together. Long story short is that when Suzanne joined me here at Concordia, to start her PhD studies, we did a couple of directed readings that were really digging into indigenous epistemologies, particularly as they relate to kinship, right, and nonhumans. So what does it mean to structure your ways of knowing, and sort of what you look to as knowledge and what you accept as knowledge, in a way that takes into account nonhumans, right, and takes into account your relationship with nonhumans. And then Noelani, who I know from a series of workshops that we've done in Honolulu, approached me about this competition to respond to an essay by Joi Ito, the former director of the MIT Media Lab, where he was encouraging people to think critically about AI and about these sort of universalizing tendencies and these abstracting, extractive tendencies. So we pulled Archer into that conversation. And we're like, we have something to say about this. So the essay is structured, on a general level, making some claims about indigenous epistemologies in general and their ability to incorporate ways of talking about nonhumans in a respectful and generative way, and also acknowledging that these epistemologies are located in specific communities and that they differ; there's commonalities but there's differences too. So the essay itself makes this kind of general argument that then actually drops down into let's talk about Hawaiian approaches towards kinship and nonhumans, Cree attitudes, Lakota attitudes, and using examples from the language, from cultural practices, from knowledge practices, to illuminate both where things were different between these three different cultural perspectives, and also where they were the same. And one of the core critiques is that, you know, at least, you know, we're operating out of a common language of English. So let's say that, like, in the English framework, coming out of North America and England, the ability to talk rationally about nonhumans is almost non-existent, meaning that, you know, man has been placed at the height and the centre of our world, coming from this tradition, and everything else is relegated to subordination, and to ‘lesser than’-ness. And that there is very little desire, and lots of obstacles, to actually looking towards, or looking to, nonhumans for knowledge. Because it's assumed that only humans generate and hold knowledge.
And so in our language, that data is suppressed, so that anytime we start talking about, you know, the trees, or having a discourse with the fish or something like that, it automatically kind of gets pushed into either this kind of spiritual register, or this, like, kind of kook register, right? It's like, that's not how rational scientists talk. Right? And so it makes it really difficult to see what those relationships might be. Right. And the fact is, and this is part of the essay, that those relationships are incredibly diverse. Right? So as Suzanne likes to say, ‘Well, you know, according to my people, all stones are capable of speaking, but only some will speak to you’. Right? And then there's other traditions that are like, ‘well, stones aren't really our relations’, right? ‘We recognise that they're part of our world, but we don't make relationships with them’. Right. So that's part of what's really important: you know, as soon as you start talking about moving that way, the mainstream discourse assumes that it's all just the same thing, you're just going to be talking to the fish and hanging out with the trees and it's all going to be... everybody's going to be sort of equal in a sense. But that's not the case, right? These languages and these cultures have incredibly rich and diverse sort of, like, topologies of those relationships, and so much of that relationality is dependent upon protocol. Right? So how do you talk to them? Who talks to them? Right? What stories exist in your community about previous conversations with them? Right, and what do you need to understand from those previous conversations before you try to have, you know, a conversation or discourse or relationship with them? So trying to highlight how the languages themselves in particular retain that flexibility, that nuance, and also that imperative, right? There are things embedded in the language that lead you to that place. Right? Whereas I would argue that the way that - let's just say scientific English, to make it simple, right - operates actually very much leads you away from that. Right? It's like, this is not a topic for conversation. So, yeah, so “Making Kin with the Machines” was really our statement of, like: first of all, there are important relationships that we have with other beings in this world. Secondly, in the Western tradition, we have lost the ability to engage in that discourse in a meaningful way, particularly when it comes to science and technology. And here are some examples of particular indigenous cultures that have retained that ability in their language, but also in their ontology, epistemology, cosmology, so in their sense of who they are, in their ways of making sense of the world, and how they place themselves in the cosmos.


KERRY MACKERETH

Fantastic. And I really liked as well what you were saying about the ways in which these knowledge systems are portrayed as somehow not rational, or they're immediately portrayed as spiritual. A lot of my own work is in Asian diaspora studies, and I think you see this a lot with, for example, Chinese medicine, which is either fetishized as this kind of deeply spiritual practice, or it's treated as junk science. Right? There doesn't seem to be any middle ground there for thinking about, you know, can we not have knowledge systems that aren't exactly the same as, you know, Western science, but that are taken as robust systems of knowledge in their own right?


JASON EDWARD LEWIS:

Yeah, and part of the legacy of the sort of both the Scientific Revolution and the Enlightenment is a displacement of all other knowledge systems, right. Like both of them make very strong claims about … they’re monotheistic claims, right. They're like, this is the only system of belief that is available to you, right. And if you go and dabble in those other systems of belief, you're a heretic. And you're actually unclean and impure, and you don't deserve to be in this system. And part of the reason why we present the argument of making kin with the machines in the way we do is to say, well, you can operate in multiple knowledge systems at the same time, right? Like you can operate like all of us, you know, so two of us on that essay are academics, two of them are artists, I'm an artist as well, but like, we operate in the academic world, as well as these worlds and they operate in the art world and having to navigate all that crazy capitalistness and also operate in, you know, the framework that comes from their, their, you know, their communities.


KERRY MACKERETH:

Absolutely. And something that I really liked that you wrote in your other work was the way in which engaging with this kind of abundance of knowledge systems also allows us to think about intelligence differently. So going from a model of thinking about impoverished intelligence to abundant intelligences. So again, could you describe a bit for our listeners what you mean by thinking through that kind of transition?


JASON EDWARD LEWIS:

So “From Impoverished Intelligence to Abundant Intelligences” is an attempt on my part to try to identify, in a concise way, what I thought was very problematic about the current mainstream research and development around artificial intelligence. And that is that it has arguably collapsed to a very narrow approach to the challenge of intelligence. So machine learning has sort of eaten the AI world and made it so that it has been identified with artificial intelligence, even though practitioners - machine learning practitioners - will, when they actually have to sit down and write a scientific paper, make it very clear, right, that this is just one approach. However, when they're talking to the press, or trying to get money, that distinction goes out the window, right, because people are excited about funding AI, but they're not excited about statistics, right, which is basically what machine learning is. The definition they are operating on is incredibly narrow; it does not take into account so many different ways in which we behave intelligently in the world. And this is, I mean, I am not the first person to make this argument, by far. Like if you go back to, I think, the 80s, when Howard Gardner published Frames of Mind: The Theory of Multiple Intelligences. Right, some of the conversations around that time were really arguing about different kinds of intelligences and how we need to understand them and how they operate differently. And then thinking about how do we formalise that and then make it computable so we can embed it in machines? You know, there's lots of discussion around that. But most of those got pushed aside when machine learning, what, 15 years ago, sort of really took off, when they had basically the right equipment to make it really feasible. And it seems just so dangerous. Right? It's like, okay, yeah, there's a lot of interesting and useful things that can come out of that as a model. Right, but it's not the only way to behave intelligently in the world. Right. And, you know, coming at it from the perspective of the indigenous people that I work with, from my own community, which I'm really just at the beginning stages of learning what that means anyways, you know, it's not that indigenous people have some lock on the right way to do this. Right? It's just that it makes it easier to see, right, easier to understand how the multiple things that go into, for instance, me being considered an intelligent member of the Hawaiian community are not necessarily the same things as what it means to be an intelligent member of, say, American society. Right? There are different measurements of what it means to be intelligent. What does it mean to act in the world in a way that contributes not only to your own well-being, but to the well-being of the community around you and of the territory on which you find yourself? And none of these models, none of the sort of standard AI models, are capable of dealing with that kind of complexity.


ELEANOR DRAGE:

I was reading this morning [Alan] Turing's “Computing Machinery and Intelligence” again, and the thing that struck me was that some of the arguments levied against his claims of a possible thinking machine were that it would be impossible for a machine to enjoy strawberries and cream. And it just struck me how British an experience that is anyway, you know, sitting at Wimbledon enjoying your fruit and lactose, and that this actually influences the kinds of machines that we build, and the ways in which we can build affinities with machines. Turing, a product of his time, says, well, you know, what people actually want is an affinity with the machine, and then we can identify it as a thinking machine. I wanted to pivot a little bit and talk about another concept that people have defined very differently, like intelligence, which is ‘bias’. And of course, there are mathematical definitions of bias. And what's fascinating is that it's used in AI ethics quite heavily, it is sort of the go-to word for describing why AI might be harmful. And you've said, like, you know, Kerry and I talk a lot about this, but you've said that maybe it's not the right word for thinking about harms in AI. And you say that “the bias in these systems is not a bug but rather a feature of an interlocking set of knowledge systems, designed over centuries to benefit white men first and foremost”. And we think this is a really crucial point. So can you explain a bit to our listeners what you mean by this?


JASON EDWARD LEWIS:

It's really difficult, I think, to grasp how enmeshed we are in a whole series of intellectual structures that have been accumulating for a very long time. Right. So it's, you know, for instance, it can be really difficult to see ... one of my favourite examples is, like, you know, in the States anyways, but I know this happens in other places in North America, you know, you can see how, fairly predictably, like, what side of town the non-white people end up on, right? But if you're just in that town, and that's all you ever see, you're just kind of like, Oh, that's kind of weird. I wonder why that happened. And there's little stories that are told locally about why that happened. But when you sort of pull out, and you look out across the country, and you start to see the pattern, and then you start digging into the history, and you realise it's a crazy morass of legislation, custom, migration patterns, conflict, all these different things that have kind of come together to resolve in this kind of repeatable pattern, then I think, I would hope, that you would be like, okay, there's something deeper going on here. And what is the deeper thing that's happening? This isn't just a series of personal choices made by the people who live in those different parts of town. And it's the same thing with our computational systems, right? So you know we've defined bias in a way that's amenable to computation. Right, so we've defined bias in a way that it's dealt with mathematically. Right? And that it can be corrected mathematically, right? But that's not actually bias in the real world. Right? So the idea is that the bias is in there because, you know, you've sort of introduced something into your sampling, or something is in your data, right, that is sort of skewing the proper way that things should function. But really, what that misses is the fact that the system can be working properly from a mathematical standpoint, from a computational standpoint, and still be horribly biased in its consequences. If you look at the math, and you look at the algorithm, and you look at the data, and you're like, everything's nice and clean and perfect there, right, so there can't be bias - then you're looking at the wrong level of the problem. And just because you're an engineer doesn't mean that you get excused from going up a level to try to understand the systematic problems with the way that you're approaching things. I would argue that a bias in the system is that we think it's okay to systematically surveil people. Right, particularly a problem in the UK. Right, like somewhere along the line, it was deemed totally acceptable to put up cameras, CCTVs everywhere, hook them together into central systems, and set up something where you can surveil an entire town. Right? So that's not bias necessarily in the cameras, or in the facial recognition software that's in the camera. That's a bias in the systems above that, which say that this is an acceptable way to run your society. So that's part of the argument that we try to make: this isn't just about bias against brown people. Right? This isn't just about how this technology discriminates against, you know, trans people, right? It's actually about you making good software. Right?
It's about you actually being a good engineer, right? Because actually, right now, you're being really crappy engineers.


ELEANOR DRAGE:

Yeah, exactly, kind of breaking that association between inaccuracy and bias, as if bias were something that could just be corrected. We had a really great chat with N. Katherine Hayles. And she was also talking about the relationships that she developed with engineers, and often how enthusiastic data scientists and engineers working in AI were to be confronted with some of these questions. And that's something wonderful that you're doing with the interdisciplinary research grant, with the indigenous working group: you bring together people from lots of different backgrounds and the humanities and the social sciences. So you've got engineers and artists working together to rethink AI, which is, I think, such an exciting thing for people from all disciplines, to come together and realise that, you know, your work can be so much more than just programming instructions, you know, it really is conceptual, and kind of recognising the conceptual aspect of AI as worldbuilding. So I'd love to hear in these final moments a bit more about that. Could you tell us about the indigenous working group and the challenges and successes of working with lots of different people with very different experiences and backgrounds who have many different things they want to achieve in this space?


JASON EDWARD LEWIS:

Absolutely, first, I just want to say that Katherine is one of my heroes. I think she's been one of the most perceptive, you know, sort of critical observers of how we're actually using … developing and using technology, you know, for decades at this point. And I think that's one of the things that's important to remember: none of the criticisms that are coming out right now directed at AI are new, right? So sociologists, and linguists, and other social scientists have been talking about the ways in which our computational systems are a reflection of our value systems, you know, for quite a long time now. And it's really too bad that, you know, we have to be sort of repeating those critiques, you know, with this latest wave of technology. But one of the things I was lucky to learn very early, as just a little newbie researcher in my early 20s, working at a place called the Institute for Research on Learning - which was a really intensely interdisciplinary place, very unusual for the time, where we had linguists, sociologists, physicists, engineers, writers, a huge range of disciplines present in the Institute - we would get together once a week, and we would look at like five minutes of videotape of, say, air traffic controllers doing their job, or insurance adjusters doing their job, you know, using computer systems, to try to understand what was going on, and spend an entire afternoon on just that five minutes, sometimes just 30 seconds, with everybody talking about what they saw. So the linguist talking about the language that was being used, the anthropologist sort of looking at the culture of the air traffic control booth, the engineers talking about the actual systems that they were using, and realising how powerful and exciting it was to be able to get access to those contextual frames for understanding the reality that was in front of you, and understanding how that reality really shifted as you changed frame. And also, just being around a group of people who, you know, came from these very, like, academically distant points, who were generous and kind and excited about sharing those frames of reference, I think taught me really early about the value of trying to create those spaces where those conversations can be had and people feel comfortable having them. And so it's something I've tried to take with me, you know, as I move through my career, and in particular around the Indigenous Protocol and Artificial Intelligence Workshops. It was really clear to Noelani, Suzanne, Archer and I, when we were writing the essay, that we needed to get other people involved in this conversation. That, both on the community side and that ... there's always this, I think, obligation. An obligation is kind of too hard of a word, because it's something that's taken on willingly, but an obligation to be in conversation with our communities, not to take the position of, like, Oh, you know - I just laughed thinking about it, right - I'm going to represent, you know, Hawaiian perspectives, right, or Archer being like, I'm going to represent Cree perspectives. Like, no, no, right? You know, he's a Cree person that's representing his particular experience as a Cree person, and what he knows about Cree language and culture, you know, in this conversation, as well as what he knows as somebody that's been building his own digital performance technology for three decades, right.
And so how do we … the question was, like, how do we bring more diverse voices into this question of what we want to do with AI? And that's what we did with the workshops, and that was the position paper that came out of that. And now, you know, the challenge is, okay, so how do we actually make working groups that actually bring people together? For instance, we did a very, very prototype workshop at MIT last fall with Scott Benesiinaabandan, who's one of the contributors to the “Indigenous Protocol and AI” position paper and the working group, and a professor of computer science at MIT called Jim Glass, who does work on computational linguistics, and Alan Corbiere, who is an Anishinaabe language keeper. The model there was, like, how do we bring indigenous knowledge practices into conversation with, let's just say, AI knowledge practices, to make it simple at the moment, so that we might be able to find interesting ways in which these knowledge practices can work productively together. Right. And then my main goal, my main priority, is the benefit of indigenous communities. But like I said earlier, I also think there will be a benefit for the other side of that equation, right, meaning the technical field, right? In terms of making better systems, right? And we can only do that if we get these multiple voices in the room together. Right? We can't do that by having the technical people make systems, you know, and then, you know, if you're lucky, get some feedback from the people who are using it, so they can tweak it around the edges. Right. And then on our side, you know, we want … I mean, most of the people that I talk to, we want to take advantage of these technologies. It's not a, you know, it's not a Luddite argument, that's not an argument about, oh, we need to just put these technologies aside, right? It's an argument about: these are incredibly powerful tools, and as indigenous people, we've survived by adapting to new technologies, as well as developing our own new technologies. So how do we adapt this technology to better suit us for now? And then over the longer term, how do we build the capacity within our communities to create this technology on our own, so we can build systems from the ground up that reflect the ways in which we want to have … we want to create intelligent entities in the world, to help us, to work with us, to play with us, to create with us, you know, to show us ways of understanding and looking at the world that we have little or no access to. One of the interesting conversations I had was with somebody, sort of a knowledge keeper, who was talking about how the ability ... you know, so one of the things that machine learning and big data are really good at, right, is seeing these patterns that we just can't … they're either too small, right, they happen to be at too small of a scale or too large of a scale for us as humans to really apprehend, right, we can't see them in the first place. And even if we might be able to see them, we can't actually really manipulate them to really figure out what they mean. Right? That's what big data and machine learning are really, really great at. You know, he was like, you know, this isn't any different than, like, having a vision. Right. It's not any different than having, you know, an experience that you can describe in different ways, that allows you to sort of apprehend patterns in the flow of life around you that you can't see in your everyday life. Right?
He's like, yeah, it's done really differently. He said, but what's exciting is those machines are going to see things that I can't see. Right? And those are part of our reality; what they see is part of our reality, too. And so we should be excited about having another set of views on reality, you know, because reality is a complicated place, and the more we learn, the more complicated it becomes, right? It was supposed to get simpler, right? We were supposed to figure it all out by now. Right? Supposed to have figured out mathematics, supposed to have figured out physics, like it was all converging, we were all gonna have a unified model, right? And then it was like, Oh, crap. Right? It's like, Oh, well, we actually can't have, you know, a complete mathematical system where we can prove everything in that system is true from within that system. Oops. Can't do that. Physics is like, Ah, you guys thought you were talking about matter? Actually, really, you're talking about energy. Right. And we actually really don't understand how the energy is working. You know, and it's funny, because, you know, you kind of get the sense of the frustration, for instance, in both of those communities around the fact that everything can't be unified. And I'm like, that's glorious! Right? Um, it's not like it's kept us from making useful stuff, right, creating good technology. You know, but it's kind of glorious that the world is that messy. And the more perspectives that we can bring to the question of understanding our world, you know, the better off we'll be, you know, the more of the world we'll be able to grasp, and the more of human experience we'll be able to honour in how we build our technologies and what our technologies respond to. And you got to have, you know, you got to have a diverse set of people in the room to do that. It's just, there's just no other way to do it. There was a great talk - I was part of a panel in Greece for this Humanity and AI conference, and I forgot who said it now, so forgive me, whoever said it on my panel. But she was talking about how, she was like, ‘you know, I just can't imagine that if the technology around, for instance, autonomous vehicles arose someplace somehow in the Global South, that the first thing that people thought to do would be to figure out self-driving cars. Right, as opposed to figuring out, you know, vehicles for moving large numbers of people’. You know, like, it's those sorts of things where, you know, you just have to scratch your head and be like, oh, okay, so everybody, the engineers, the funders, they all thought this was, like, the most exciting thing they could do with the incredible brain power and the incredible amount of money that they have at their disposal. You know, and it's like, wow, okay - as a slogan that rose up after the rocket ships went up into space put it, we need better billionaires. Right? Like, you know, I don't necessarily have a problem myself with billionaires, right? But, you know, God, they're really uncreative. They really lack imagination. You know, they're just fulfilling a Western science fiction fantasy that's, you know, 100-150 years old, and it's so boring.


ELEANOR DRAGE:

Well, they're all the same books. I mean, this is kind of the interesting thing. And Kerry and I do some work with narratives around AI, so how people imagine AI, and, you know, we would love people to also read different books. My PhD was in science fiction written by women in Europe. And the first question I get is, oh, women write science fiction? And you're like, of course! Yes, so much! But also in other parts of the world there's amazing science fiction that may not be identifiable as science fiction within the Western canon because it doesn't pertain to those same tropes. And I know that there are a lot of, like, incredible discussions around indigenous science fiction or indigenous speculative fiction. Well, we have come to the end of our time, and it's been incredibly interesting to hear you speak. So thank you so much for joining us. It's a real pleasure. And I look forward to seeing you very soon, hopefully in person at some point.


JASON EDWARD LEWIS:

This has been a real pleasure for me, Kerry and Eleanor, and I really appreciate you being willing to listen to me rant for an hour. And I think you guys are doing great work with The Good Robot podcast. So I look forward to more. Thank you so much.




