
Wendy Hui Kyong Chun on Facebook ‘Friendship’ and Predicting the Future

In this episode, we chat with Professor Wendy Chun, Simon Fraser University's Canada 150 Research Chair in New Media. An expert in both Systems Design Engineering and English Literature, she offers an extraordinary analysis of contemporary digital media that bridges the humanities and STEM sciences to think through some of the most pressing technical and conceptual issues in technology today. Wendy discusses her most recent book, Discriminating Data, where she explains what is actually happening in AI systems that people claim can predict the future, why Facebook 'friendship' has forced friendship to become bidirectional, and how technology is being built on the principle of homophily, the idea that similarity breeds connection.


Wendy Hui Kyong Chun is the Canada 150 Research Chair in New Media at Simon Fraser University, and leads the Digital Democracies Institute which was launched in 2019. The Institute aims to integrate research in the humanities and data sciences to address questions of equality and social justice in order to combat the proliferation of online “echo chambers,” abusive language, discriminatory algorithms and mis/disinformation by fostering critical and creative user practices and alternative paradigms for connection. It has four distinct research streams all led by Dr. Chun: Beyond Verification which looks at authenticity and the spread of disinformation; From Hate to Agonism, focusing on fostering democratic exchange online; Desegregating Network Neighbourhoods, combatting homophily across platforms; and Discriminating Data: Neighbourhoods, Individuals and Proxies, investigating the centrality of race, gender, class and sexuality to big data and network analytics.


Transcript:


KERRY MACKERETH:

Hi! We're Eleanor and Kerry. We're the hosts of The Good Robot podcast, and join us as we ask the experts: what is good technology? Is it even possible? And what does feminism have to bring to this conversation? If you wanna learn more about today's topic, head over to our website, where we've got a full transcript of the episode and a specially curated reading list with work by, or picked by, our experts. But until then, sit back, relax, and enjoy the episode.


ELEANOR DRAGE:

Today we’re talking to Professor Wendy Hui Kyong Chun, Simon Fraser University's Canada 150 Research Chair in New Media. An expert in both Systems Design Engineering and English Literature, she offers an extraordinary analysis of contemporary digital media that bridges the humanities and STEM sciences to think through some of the most pressing technical and conceptual issues in technology today. In this episode we discuss her most recent book, Discriminating Data, where she explains what is actually happening in AI systems that people claim can predict the future, why Facebook 'friendship' has forced friendship to become bidirectional, and how technology is being built on the principle of homophily, the idea that similarity breeds connection. We hope you enjoy the show.


KERRY MACKERETH:

Thank you so much for being with us today. Would you mind introducing yourself and telling us a bit about who you are, what you do and what brings you to the topic of feminism, gender, race and technology?


WENDY CHUN:

So I'm Wendy Hui Kyong Chun. I'm Director of the Digital Democracies Institute at Simon Fraser University and Canada 150 Research Chair in New Media in the School of Communication. My work brings together various disciplines, from engineering to critical theory, in order to understand the ways in which technology and culture shape each other. I'm particularly interested in the ways in which our current technologies amplify discrimination, but also the ways in which we can rethink technologies in order to intervene in some of our social problems.


ELEANOR DRAGE:

Fantastic, thank you. And the work that you're doing is wonderful. The paper I've read most recently looks at homophily, and I'd love you to explain that a little bit for us, particularly because we're surrounded by these recommender systems, which means that we're in these kinds of echo chambers, and I think a lot of our listeners will be familiar with that. At the same time, I've always loved hearing about different modes of affiliation, different ways of thinking about affinities between people. I was kind of brought up reading Paul Gilroy and the way that he understands these different affiliations - and I'll put him in the reading list. So can you tell us a little bit about how you're thinking about the ways forward, or ways out of homophily, these kinds of distributed intimacies? How are you thinking about that?


WENDY CHUN:

So homophily is really fascinating, because it's become axiomatic within social media networks and network science more generally: the idea that birds of a feather flock together, or similarity breeds connection. Now, what's fascinating about homophily is that it came from studies of biracial housing projects. And so echo chambers aren't an accident, they're indeed the goal. And one thing that's really important about studying homophily and its social science roots is that it gets us away from assuming that technology would just be good if it engaged the social sciences and humanities. Because part of the problem that we're having with technology isn't its ignorance of social science or the humanities, but the ways in which it's automated certain problematic concepts. So homophily is really fascinating, because Lazarsfeld and Merton and the Columbia Bureau of Applied Social Research were really trying to understand how to go forward in the US with the housing crisis. And so they looked at a biracial housing project and they asked people many questions, but three were: who are your three closest friends? Do you think that the housing project should be biracial? And do you think the races get along? And based on this, they decided that you were liberal if you thought that the races got along and the housing project should be biracial, you were illiberal if you said no to those, and you were ambivalent if you thought the races got along, but they shouldn't live together. Now, what's fascinating is then they said, okay, look, liberals overselect each other, and illiberals overselect each other. So values are at the heart of homophily. But in order to make this conclusion, they not only ignored all the responses of Black residents, because they claimed there weren't enough who had ambivalent or liberal friends, but they also got rid of the white ambivalents, which was the largest category of white residents. And so it was a very ... and they only focused on their three closest friends. And they even said, you know, the illiberal overselection of illiberals is itself statistically insignificant, because it was actually a very small number, smaller than the number of Black residents, which they disqualified earlier in terms of illiberal or ambivalent friends. And so by looking at a very small portion, they mapped this thing called homophily, which is the idea that like associates with like. But they said, look, we have all this evidence of other types of formations or friendships - there were a lot of friendships, or acquaintances, between Black and white residents; there was a far more complex picture. And so part of what the work has been doing is to say that, if there are problems within technology, maybe part of the way to address them is by examining the axioms embedded within technology, rather than assuming that all the discrimination comes from the data, etc., that's outside of it. Because if you go to the fundamental axioms, and then you dig deeper, then you can uncover - but not even uncover, because they've never really been buried - the populations which are at the core of these concepts. And so part of the new book Discriminating Data goes back and tries to understand the populations that have been at the heart of these concepts which have been embedded in computing, and then say, well, wait a minute, these populations were far more complex.
If you go back to the archive, for instance, you have these instances of what Lazarsfeld and Merton would call heterophily. So they didn't think that homophily was the only game in town - in fact, they questioned the idea that homophily was the only game in town. And you also get notions of indifference, ambivalence, and for me that's really key. Because if you think of how connection works, a lot of what connects us is actually ambivalence, right? So there's a way of looking at the network map where you look at these strong and clear lines of connection. But my question is: what needs to be in place for all these clean lines of connection to emerge? And so if we think of the network as the whitespace, then you start mapping and understanding the importance of ambivalence, as well as the ways in which these gaps are created, and what they're embedded in. So for instance, the answers of the Black residents and the presence of the Black residents are central to the emergence of homophily - even as they were excluded, the questions were all about them; they are literally the gap that makes the network possible. So if we change our focus and start thinking about that, then we open up an entirely different world.
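[To make the measurement logic concrete, here is a minimal sketch in Python - toy data and a common modern operationalization of homophily, not Lazarsfeld and Merton's actual procedure - comparing the observed share of same-group ties to what random mixing would predict:]

from collections import Counter

# Toy friendship network (hypothetical names and groups, not the 1950s study).
friends = [("a1", "a2"), ("a1", "b1"), ("a2", "a3"), ("b1", "b2"), ("a3", "b2")]
group = {"a1": "A", "a2": "A", "a3": "A", "b1": "B", "b2": "B"}

# Observed share of ties that stay within a group.
same = sum(group[u] == group[v] for u, v in friends)
observed = same / len(friends)

# Expected share under random mixing: the sum of squared group proportions.
sizes = Counter(group.values())
n = sum(sizes.values())
expected = sum((s / n) ** 2 for s in sizes.values())

print(f"observed {observed:.2f} vs. expected {expected:.2f}")
# observed > expected gets read as homophily - but, as Chun stresses, the
# verdict depends entirely on which ties and which respondents get counted.

[Dropping even a single node or edge from the toy data changes the numbers, which is exactly the fragility Chun locates in the original study's exclusions.]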


ELEANOR DRAGE:

Yeah, it's absolutely fascinating, what you're saying about how we choose which traits we mean when we say people are similar - saying people have traits in common is a value hypothesis, and you put it so beautifully. So what has the internet done, then, to friendship?


WENDY CHUN:

Well, one thing you can say that it's done - that social media networks have done - is that for the longest time, friendship was difficult, because it was never bi-directional. Just because you like somebody doesn't mean they like you back. I mean, that's fundamental. I know that you were brought up in a different era, I think, where on Valentine's Day either you didn't send out cards, or everybody had to send out a card to everyone. Whereas, you know, we were in the agonising days of: would you get one? You gave it to somebody, but would your fellow eight year old give you one back? Which sort of encapsulated the fact that friendship is unidirectional. I mean, [Jacques] Derrida talks about this really beautifully. And so the idea of bi-directionality is implemented algorithmically within social media networks, because it makes the math and the prediction so much easier. So it forces a certain bi-directionality. What's also really fascinating is that the initial work on homophily was based on close friendships, whereas now in social media networks friendship has become far more loose - it would never be based on your three closest friends. So there's a way in which there's been an amplification through this sort of ... and a weakening, through this insistence on bi-directionality.
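[As a minimal sketch of that forcing move - hypothetical names, not any platform's actual code - storing 'friendship' only when both directed ties exist turns an asymmetric relation into a symmetric one:]

# Real-world liking is directed: an edge from a to b says nothing about b to a.
likes = {
    ("alice", "bob"),    # Alice likes Bob ...
    ("bob", "alice"),    # ... and Bob likes Alice back.
    ("bob", "carol"),    # Bob likes Carol, who never reciprocates.
}

def platform_friendship(directed):
    """Keep only mutual ties, stored once - the Facebook-style move."""
    return {tuple(sorted(edge)) for edge in directed if edge[::-1] in directed}

print(platform_friendship(likes))  # {('alice', 'bob')} - Carol's tie vanishes

[The resulting undirected graph has a symmetric adjacency matrix, which is what makes the mathematics and the prediction so much easier, as Chun notes.]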


KERRY MACKERETH:

Yeah, that is so fascinating. And I actually want to loop back to something you were saying earlier, about the super interesting origins of homophily as a concept. And, you know, I have to stake my own claim here: my own work is always thinking about gendered and racialized histories of violence and how they play out in new technologies, specifically thinking about Orientalism and these particular patterns of racialization. But I'd actually love to hear a bit more about your new book that you've mentioned. Because I believe you've mentioned to me previously that one of the things you look at in this book is the histories that lead to certain kinds of harms generated by new technologies, and the links between the foundations of some of our technical fields and much longer histories of scientific racism. So yeah, could you speak to that for a bit?


WENDY CHUN:

Sure. So the four concepts I focus on are correlation, homophily, authenticity and recognition. And to give you just one example, what's fascinating to me about correlation: of course, we allegedly live in the era of big data, where everything is correlation - correlation has, you know, opened the future, knows everything, does everything. Those same claims were made in the early 20th century by eugenicists, and with the same tools: in fact correlation, component analysis, and linear and logistic regression were developed by eugenicists in order to understand the ways in which the future could be determined by the past, on the understanding that certain traits were unchangeable across past and future. So what correlation did was allegedly find those unchanging factors, so you could amplify them in order to shape a certain kind of future. And so one question is: what does it mean that we have this resurgence of correlation and these tools used by eugenicists? And the claim isn't that anyone who uses statistics is a eugenicist - though clearly there are some - but rather that if the world right now feels so closed, if it feels like there are no possibilities for radical difference or the future, it's precisely because we're using these tools that were based on closing the future. Actually, they were based on making sure that there is no true disruption. And bizarrely enough, if you go to the literature, Karl Pearson himself argued that learning itself was impossible. And so you have machine learning based on notions of "learning" which have nothing to do with what we would consider human learning to be. And so part of the goal is to understand the ways in which certain worldviews and certain notions of past, present and future have been embedded through these tools and their assumptions about what is unchanging. And I do the same thing with authenticity: well, how do correlations work? How is it that certain things repeat what is considered "authentic", and how is our call to authenticity actually a way to buttress certain correlations? And then with recognition, looking at the ways in which technological recognition is based on certain forms of discrimination.
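[As a minimal illustration of the move Chun describes - reading the future off the past via correlation - here is a toy Python sketch; the numbers are entirely made up, and statistics.correlation requires Python 3.10+:]

import statistics as stats

# Hypothetical "past" data: a trait measured in parents and their children.
parent = [1.0, 2.0, 3.0, 4.0, 5.0]
child = [1.2, 1.9, 3.1, 3.8, 5.2]

r = stats.correlation(parent, child)  # Pearson's r
print(f"r = {r:.3f}")

# A high r licenses a regression line, which is then used to "predict" the
# next generation from the last - the past projected forward as the future.
slope = r * stats.stdev(child) / stats.stdev(parent)
intercept = stats.mean(child) - slope * stats.mean(parent)
print(f"'predicted' child value for parent = 6.0: {slope * 6.0 + intercept:.2f}")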


KERRY MACKERETH:

That's so fascinating. And, you know, just picking up off that last point, the hope vested in the concept of authenticity almost reminds me of Rey Chow's work on the issues with valorizing the authentic. But I actually want to pick up on something you were saying around eugenics and correlation. And I'm just wondering: on the one hand we have this interesting idea of what's being passed down, which is unchanging, and on the other hand, you know, I feel like in AI, the field that we mainly work in, there's such a fetishization of agility and flexibility, and the idea of the almost contentlessness of these particular technologies. Is this a dynamic that plays out in your own work, or do you have any thoughts on that?


WENDY CHUN:

Um, so I guess if you go to the basics of feature selection, and the notion that certain features are important because they discriminate well between classes - so you think of things like Support Vector Networks - what you have there is a very explicit drawing from Ronald Fisher's work on discriminant analysis. So what's presumed in advance is that different groups exist, and what you have to do is figure out the feature that discriminates best between these two groups, right; and then the feature has to be something that both populations share, but for which they have different "means", right. And so there's a certain way in which the way that's been taken up is very much already presuming difference, and trying to figure out the best boundary between them in order to "recognise" what you've already discriminated between. So there's a weird logic there. But in terms of fluidity, I think the work on neural nets is really fascinating. Because on a certain level there's this, you know, claim that we can't understand what they're doing - there are so many layers, how can you comprehend what's happening? And then there are really interesting sociological hacks that go in to say, okay, let's take this out, let's see what this is, let's see what features matter, etc., etc., in order to understand what's going on. Like, at a certain fundamental level it's logistic regression. At every level it's logistic regression upon logistic regression upon logistic regression - choose between A and B, where there's a decision that there is a definitive, quote, "A" and "B". So even within these modes of fluidity, they're based on a certain profound binary logic in decision making. And not surprisingly, if you go to a lot of the literature which claims that, you know, they've come up with machine learning for x, y, and z, it's usually based on a logistic regression problem - choose between A and B - where the programme is always given A and B, and it has to determine what is this and what is that. And that's not how recognition in the real world even happens, let alone ... you know. So there are all sorts of ways in which, even within the systems, the problem is being framed in such a narrow way that the generalizability of it is really called into question. But I think the difference between statistics and machine learning also gets to those questions of fluidity, as well as the bias-variance trade-off, etc. And the modes of validation in traditional statistics and machine learning sort of get at the ways in which the loosening of certain models allegedly gives you the future. So that's the difference between machine learning models and statistical models, which are validated in a different way - by understanding their error against the past, right? So I think there are different ways to think through questions of fluidity and the overarching structures of rigidity.
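[A minimal sketch in Python of the 'logistic regression upon logistic regression' point, with made-up weights - an illustration of the structure, not any production system:]

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One logistic unit: a weighted sum pushed through a sigmoid - the classic
# logistic regression form, choosing between A and B.
def logistic_unit(x, w, b):
    return sigmoid(x @ w + b)

rng = np.random.default_rng(0)
x = rng.normal(size=3)                  # toy input features
W1, b1 = rng.normal(size=(3, 4)), rng.normal(size=4)
w2, b2 = rng.normal(size=4), rng.normal()

hidden = sigmoid(x @ W1 + b1)           # layer 1: four logistic units
p_A = logistic_unit(hidden, w2, b2)     # layer 2: the final A-vs-B choice
print(f"P(A) = {p_A:.3f}")              # the model must answer A or B

[However many layers are stacked, the final step is still a binary choice between categories the designer fixed in advance.]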


ELEANOR DRAGE:

Ooh, yes - could you then talk about the predictive capabilities, or this perception of prediction, that we associate with machine learning, and that is still really not very much critiqued by people who are perhaps buying in third-party technologies and imagining that the systems can really tell them things, can forecast what their workforce is going to look like in the future, or other things that really matter and determine whether people stay or go in a company? I think of that as performative, and I know that you do too, in a certain sense, and I'm sure that you can explain what that means a little better than me. So can you explain what it means for a system to be predictive, given that this is something that's attributed to machine learning models and not to statistics in the same way, and what machine performativity means and why we should care?


WENDY CHUN:

So what's so strange about machine learning models is that when people say they're predictive, they're validated and tested against past data, not future data. So when they say this is such-and-such amount accurate, or that it correctly predicted something, it didn't, like, predict the future - it correctly predicted past data that was either set aside during the training phase from the same data set, or came from a different data set, right. But it's fundamentally the ability of the programme to predict the past, not the future. So the fact that there's this fundamental misunderstanding of the validation of these models is really important, because if these are ... so the most basic example, of course, is the discriminatory past, right? If you think about these models, it's not simply that if the training data is sexist they will make sexist predictions, but that they'll only be validated as correct if they make sexist predictions. And that perpetuation of these predictions as true profoundly shapes what is considered to be true in the future, right. So in these models, what's true is what is consistently repeated: truth equals repetition. And that is a very, very narrow definition of truth, and it's not the definition of truth that we have for many models. So I always think through global climate change models: there, the point is for them not to be validated. The fact that we can validate their predictions means that these models failed, because they were supposed to give us the most likely future so that we would change our actions so that future would not come about. And in this sense, I do think machine learning models can be productive, because they show us that if we keep producing this sexist training set using this logic, we will have this sexist future. And so they could be used as warnings, as ways for us to reach towards other kinds of futures. And so using machine learning models as probes is far more interesting, I think, and productive, and far more true to their purpose.
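[As a minimal sketch of that validation loop - entirely made-up numbers and a deliberately crude model - note how 'accuracy' here only ever measures agreement with held-out past records:]

import random

# Hypothetical past hiring records: (gender, was_hired). The past is biased.
past = [("m", 1)] * 40 + [("m", 0)] * 10 + [("f", 1)] * 5 + [("f", 0)] * 45

random.seed(0)
random.shuffle(past)
train, test = past[:70], past[70:]   # the held-out "test" set is still the past

def fit(rows):
    """A model that simply repeats each group's past hiring rate."""
    rates = {}
    for g in {g for g, _ in rows}:
        outcomes = [y for gg, y in rows if gg == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return lambda g: 1 if rates.get(g, 0.0) >= 0.5 else 0

model = fit(train)
accuracy = sum(model(g) == y for g, y in test) / len(test)
print(f"'predictive accuracy': {accuracy:.0%}")
# High accuracy means the model faithfully repeats the biased past -
# truth as repetition, exactly the trap Chun describes.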


ELEANOR DRAGE:

Absolutely, and I don't think it's a coincidence necessarily that by doing that it operates a bit like the science fiction that I read growing up and that I did my PhD on, which is imagining in the otherwise, and saying, Oh, if you're not careful, hang on a second, the world is going to look like this.


WENDY CHUN:

Yeah, no. And Beth Coleman has been doing great work around speculative AI, or you can think of the Indigenous AI protocol as well - there have been a lot of people trying to think through this space differently. With Kate Crawford I'm leading a working group called Machine Unlearning, which tries to understand what machine learning would look like if it actually engaged learning - true forms of learning, but also critiques of learning. So drawing from Ariella Azoulay's critique of the ways in which learning, as it's currently formulated, encapsulates problematic notions of progress, and also calling on us to think about the past not as past but rather as worlds that are still with us - to treat people not as primary sources but as potential companions - or Kara Keeling's work about listening to the worlds that are still with us, and what it means to listen to and engage with these.


ELEANOR DRAGE:

Those are amazing pointers, and we'll put them in the reading list for anyone who's wondering what those texts are. So we're talking here about the counterfactual, the 'what might have been' and 'what could yet be', and you discuss that a bit in your work in relation to archival work - not considered super sexy, but extremely important - and discovering the counterfactual. So what have the two got to do with each other?


WENDY CHUN:

So I think what's great about Ariella's work is that she argues that what we need isn't a counterhistory. It's not the idea of discovering a history that is counterfactual, that hasn't been outlined, but understanding potential history - the ways in which these potentials and these worlds are still with us. And to imagine them as past is part of the problem. So what I find really intriguing and important about going to the archives is not that we're discovering a counterfactual history - I mean, what does it even mean to discover segregation and discrimination within US residential housing in the mid 20th century? If you need to discover that, then there's something very problematic - but rather to think through the ways in which these voices were both ... to trace routes, just to go through and understand the traces of these voices, and to really try to see ourselves as: so what if, whenever we use these concepts, we're touching these populations? What if every time we think through these concepts, we reopen and rethink the relations that made them possible, so that segregation doesn't become something that's past - because clearly it's not, in the US and in many places - but rather something that's re-lived and reanimated every time we use these concepts? What else could emerge? How would that force us to think differently about the very world we live in?


KERRY MACKERETH:

And I want to ask you as well, drawing back on what you were saying - I just find it really fascinating, this idea of truth as repetition in machine learning models. And that almost reminds me of Sara Ahmed's work: this idea of, you know, racism, offering a different account of the world, and the ways in which machine learning offers such a singular account of the world. Pivoting slightly: you know, we're a feminist podcast, and we're really interested in what feminist theory and ideas can offer to thinking differently about machine learning and AI, but also more broadly about technology in general. So we want to ask you, what does feminism mean to you, and how does it influence the way that you think about technology?


WENDY CHUN:

So I have a very strange background: I started off in engineering, then received a PhD in English literature, and then moved to media theory. And that trajectory has everything to do with grappling with the need for feminism. When I was an undergraduate, the Montreal massacre happened: a man walked into an engineering classroom, separated the men and the women, and started killing the women. And one of the women in the classroom said, look, we're not feminists, we're just women in engineering trying to lead a normal life - I forget what she said exactly, but it was something like, 'We're not against men, we're not protesting on the street, we're just girls in engineering'. And he said, 'You're women, you're in engineering, you're feminists, I hate feminists', and he shot her. And one thing that really provoked for me was the question of how one even comprehends the violence around one. Because in engineering, the classrooms are very tight - you're with this group of people for five years; for me it was like having 12 sisters and 40 brothers, right. And there's a way in which it's so focused on notions of "meritocracy" - you're graded, etc., etc. - that you're not given the ability to understand the violence and the discrimination that is actually being perpetuated by these structures as well. I was also in co-ops, where you work for four months, and I was the only woman in these all-male engineering places - I really felt like a pair of walking ovaries. And the racial politics were also very complicated. When I was at Gandalf Data, which I loved, because I was working with hardware - and I really love hardware; I was working on 10Base-T converters - I was the only ... one of the few women. And then I would go to the lunch room, and I noticed there were all these other Asian women, just a few of them, and I had no idea where they were from. And then they gave us a tour of the engineering facility, and it was filled with Vietnamese-Canadian women who were building the hardware in the basement, whom we would never see except for the few who came to the lunchroom. So all around me was evidence of discrimination, of really problematic racial politics, as well as violence. And I turned to the humanities in order to understand the limitations of insisting on a certain form of equality in the face of active inequality - the ways in which that just left you exposed, and really a victim, in a way, by declaring you weren't a victim. And so it was the move into the humanities, and thinking through feminist theory as well as critical theory and Gender and Sexuality Studies, that really helped me understand the ways in which these were all very intertwined. And what's fascinating, of course, about the story of the woman in the Montreal massacre is that she later became a feminist, very openly: she declared that she didn't know it at the time, but that she absolutely did need feminism, and she wrote a moving letter to her daughter about the necessity of feminism. And what was really problematic about the aftermath of the Montreal massacre is that it became this really public debate between prominent feminists, who said this is violence against women, and certain survivors.
And it wasn't all survivors - it was just certain survivors who said, this is our tragedy; this isn't about violence against women, this is about violence against us. And what shifted things was the fact that they found Marc Lépine's letter - he's the person who committed the atrocity - in which he actually listed prominent Québécois feminists as the people he was going to kill, but he killed these other women. And so that opened up the ... and now December 6 is a national day of acknowledging and dealing with gender violence in Canada. So I think that what's so key for me about feminism is understanding the limitations of facially equal systems, so that we can actually have equality. And I think you need feminism in order to make that happen.


KERRY MACKERETH:

Eleanor and I just want to say thank you so much for appearing on our podcast. This whole discussion has been so fascinating. It's been a real privilege to chat to you. So thank you so much.


WENDY CHUN:

Oh, thank you. And thank you so much for the conversation. It's been really lovely to talk together about this.





