
Blaise Agüera y Arcas on Debunking Myths in Technology: Intelligence, Survival, Sexuality

In this episode, we talk to Blaise Agüera y Arcas, a Fellow and Vice President at Google Research and an authority in computer vision, machine intelligence, and computational photography. In this wide-ranging episode, we explore why it is important that the AI industry reconsider what intelligence means and who possesses it, how humans and technology have co-evolved with and through one another, the limits of using evolution as a way of thinking about AI, and why we shouldn’t just be optimising AI for survival. We also chat about Blaise’s research on gender and sexuality, from his huge crowdsourced surveys on how people self-identify through to debunking the idea that you can discern someone’s sexuality from their face using facial recognition technology.


Image by: Max Gruber / Better Images of AI / Ceci n'est pas une banane / CC-BY 4.0


KERRY MACKERETH:

Hi! We're Eleanor and Kerry. We're the hosts of The Good Robot podcast. Join us as we ask the experts: what is good technology? Is it even possible? And what does feminism have to bring to this conversation? If you wanna learn more about today's topic, head over to our website, where we've got a full transcript of the episode and a specially curated reading list with work by, or picked by, our experts. But until then, sit back, relax, and enjoy the episode.


ELEANOR DRAGE:

Today, we’re talking to Blaise Agüera y Arcas, a Fellow and Vice President at Google Research and an authority in computer vision, machine intelligence, and computational photography. In this wide-ranging episode, we explore why it is important that the AI industry reconsider what intelligence means and who possesses it, how humans and technology have co-evolved with and through one another, the limits of using evolution as a way of thinking about AI, and why we shouldn’t just be optimising AI for survival. We also chat about Blaise’s research on gender and sexuality, from his huge crowdsourced surveys on how people self-identify through to debunking the idea that you can discern someone’s sexuality from their face using facial recognition technology. We hope you enjoy the show!


KERRY MACKERETH:

Thank you so much for being here today. So just to kick us off, could you tell us a bit about who you are, what you do, and what brings you to the topic of feminism, gender and technology?


BLAISE AGÜERA Y ARCAS:

Thank you so much for having me on. I'm Blaise Agüera y Arcas, a Fellow and Vice President at Google Research. I've been leading a team at Google for the past eight years that works on AI in embodied forms. Rather than thinking about ‘giant AI’ in the data center, we've been working on various forms of machine learning artificial intelligences that typically run on people's devices. We have created most of the AI features on Android and Pixel phones. We've also done a lot of work at the intersections of privacy and machine learning. For example, we’ve always found the idea that training models should always involve collecting lots of data problematic. We've invented various techniques, like federated learning, which has become a pretty big deal over the last few years, to be able to train models without surveillance. That's the kind of work that we do. As for how I came to this topic, I was at Microsoft for seven years before Google, and have been interested for a very long time in the problem of unequal representation in the workforce. I've always done whatever I can to try and correct it in small ways. We're very, very far from getting to the right place. The lack of representation in tech is especially egregious given that our entire field of computer science was basically founded by women, who were key to the development of it at various stages. Since the 80s, it feels like we've really moved backward. The way technology is made typically has not represented the needs of all humans equally. As both an employer and a technologist making things, it's been a big focus for me for the last 15 years.
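To make the federated learning idea Blaise mentions concrete: the core mechanism, federated averaging, has each device train on its own private data and share only model weights, which a server averages. Here is a minimal toy sketch in Python - an illustration of the general technique, not Google's actual implementation; the linear model, data, and hyperparameters are invented for the example:

```python
import random
from statistics import mean

def local_step(w, data, lr=0.1, epochs=20):
    """One client's local training: gradient descent on mean squared error
    for the toy model y = w * x. Only the final weight leaves the device."""
    for _ in range(epochs):
        grad = mean(2 * (w * x - y) * x for x, y in data)
        w -= lr * grad
    return w

def federated_round(w_global, clients):
    """Each client trains locally on its own data; the server only ever
    sees (and averages) the resulting weights, never the raw data."""
    return mean(local_step(w_global, data) for data in clients)

random.seed(0)
TRUE_W = 3.0
clients = []
for _ in range(5):  # five devices, each with its own private dataset
    data = [(x, TRUE_W * x + random.gauss(0, 0.01))
            for x in (random.uniform(-1, 1) for _ in range(50))]
    clients.append(data)

w = 0.0
for _ in range(10):  # ten rounds of federated averaging
    w = federated_round(w, clients)
# w converges toward the shared underlying weight (about 3.0)
```

In real deployments the updates are further protected (for example with secure aggregation and differential privacy), but the averaging step above is the heart of training without centralising raw data.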


ELEANOR DRAGE:

We've been following your work closely so it's good to hear you explain it in more detail. I know that your wife, Adrienne Fairhall, also does phenomenal work at the Fairhall lab.


We're The Good Robot, and we'd like your take on our billion dollar questions, which are, what is good technology? Is it even possible? (I feel like you're going to answer that in the affirmative!) And how are you working towards it?


BLAISE AGÜERA Y ARCAS:

I'm not a techno-optimist, so it’s not as if I believe that technology is somehow automatically good. I think that a lot about a technology’s goodness* or badness* - and I put asterisks by both of those terms - is a question of design. The sense that there is a good or a bad independent of perspective is one that we should question. In reality we are all on a social graph - not just individuals but nonhuman entities, ecosystems, companies, sports teams, political parties, and countries. They all have relationships with each other. They each have a will, and a sense of what is positive or negative. Anytime technology is mediating one of those interactions, or augmenting one of those ‘nodes’ in that graph, it can have all sorts of effects, both positive and negative with respect to any given perspective. At a company like Google, we make things that mediate not only the relationship between Google and a user, but relationships between people or other sorts of entities. The trick is to take a step back and ask what sorts of tweaks or modifications to those mediations will improve or harm the subsequent interactions. Nor is it possible to take a utilitarian standpoint and simply sum over everything and maximize it. What has to be done is really quite subtle. Thinking about good or bad technology forces you to ask some really profound questions about ethics, how to define some of these terms, and to confront some rather complicated, if not ugly, proofs of nonexistence about optimization.


KERRY MACKERETH:

Fantastic, thank you. In the field of artificial intelligence we talk a lot about intelligence without always engaging with what we actually think it is. And so we want to ask you, what do you see intelligence as being, and who do you think it belongs to?


BLAISE AGÜERA Y ARCAS:

I see intelligence as being a property of that whole graph of social relationships, rather than being a property of the individual nodes, whether they're people or otherwise. I think that our obsession with intelligence as an individual property that resides in a single brain is a Western problem, and a problem that you can think of as being similar to the way we try to attribute art to a single great creator, or the great man theory of history. That's not the case at all. There are so many examples to draw from - there's a book by Lothar Ledderose, a German professor of the History of Art of Eastern Asia, called Ten Thousand Things: Module and Mass Production in Chinese Art, in which he studies artifacts and monuments like the Terracotta Army of Qin Shi Huang, the first Emperor of China, and the temples of Cambodia. These are obviously the beautiful artistic expressions of an entire culture. They can't be attributed in any meaningful way to a single artist. I think that's true of everything that we do. To the extent that an artist does something singular that seems really remarkable, what they've done is essentially to cover up their tracks as well as they can, and perhaps also to internalize a lot of critics and other actors in an effort to bring some of that network in house. But in reality, intelligence is this thing that operates across the entire network and is not, in that sense, limited to the human at all. Even when we talk about entities like a company, we often ask ‘is the company acting intelligently?’, as in, is it being smart with this or that decision. That already implies a nonhuman entity that is intelligent. AI is not new with respect to the attribution of intelligence to non-human entities.


ELEANOR DRAGE:

The importance of thinking intelligence collectively is something that marginalized communities have also been forced to learn over time, in order to mobilize a more powerful collective. Before I ask you something else, can you briefly tell us exactly what kind of damage Western conceptions of individual intelligence are doing in the field of AI?


BLAISE AGÜERA Y ARCAS:

There are so many issues that arise when one makes that attribution error. One of them is that it stops you from thinking about collective harms. When we think about, for example, the dignity of a group of people who identify in a particular way, they can’t be thought about as a mere collection of individuals who might benefit or be harmed by a technology. Any group is greater than the sum of its parts. Whenever we talk about discrimination, for instance, it's necessarily a collective harm that we're talking about. If we were to take the harms done to members of a certain community and distribute them uniformly across the entire population, then according to a utilitarian or individualistic perspective it's a wash - the two scenarios are equivalent. As we know, they are not equivalent. The moment you start to think about discrimination or injustice, or oppression or privilege, you have to think about collective entities with respect to good and ill. In that sense, even any notion of justice requires consideration beyond its application to individuals.


ELEANOR DRAGE:

That’s really interesting - and to your previous response, it’s great to hear people from STEM disciplines talking about feminist ideas in the language of nodes and graphs. Some of my students have STEM backgrounds and they can often feel a bit excluded from the humanities way of thinking about concepts from gender studies. It’s an interesting way of rethinking intelligence and the individual; Kerry and I are fortunate to work with Stephen Cave, who also does great work on the history of intelligence in relation to AI.


People may have a sense of how humanity has developed through technology over time, the impact that technologies from the wheel to the hearing aid have had on us over time, on what we can do, where we can live, and who is included and excluded. But what might people not know about the coevolution of humanity and technology?


BLAISE AGÜERA Y ARCAS:

Well, there's a quote by the Australian performance artist Stelarc that I like a lot. He says, “technology is what defines the meaning of being human, it's part of being human”. You've alluded to that in your question. I'm a very firm believer in that idea. There are very literal cases, like the fact that we have short guts because we invented fire and so could cook food, and fire has therefore come back around and shaped our bodies. I think the same is true of our lack of body hair. Our clothes are our body hair. In that sense, when people talk about natural foods and not eating GMOs, I think that's a bit of a fallacy in the sense that every food that we eat is heavily engineered, and has been for around 10,000 years. We've been engineering ourselves for a lot more than 10,000 years, probably closer to 7 million. Another way of thinking about technology is as an inorganic growth of our bodies or our intelligence, in the same sense that a snail shell is a kind of technology. I think about technology as being bound up with humanity in the same way that a snail shell is bound up with a snail. The difference, of course, is that not only is technology augmenting our bodies, but now with AI, it augments our minds in very non-trivial ways. Then again, you can think about language as a technology that has been augmenting our minds for hundreds of thousands of years. So I don't see a boundary between humans and technology, and that's one of the reasons that I've always been very puzzled by the idea of ‘othering’ technology. Sometimes people imagine that machines are so influential that there is a voice coming from the television telling them what to do. This particular form of paranoia has existed all throughout history. It is, in effect, a misattribution of agency.
I think that we're guilty of the same kind of misattribution when we talk about technology wanting things that are independent of us, or when we talk about us being in competition with our technology in some way. We're always competing and collaborating with each other in all kinds of ways. So the idea of technology as being ‘other’, as alien, is pretty weird when you think about it in the context of the snail and its shell.


KERRY MACKERETH:

That's really fascinating. It brings me to a question about the language that is used to describe human and AI evolution, development and change. I was really struck by the evolutionary metaphor that was used in the film Ex Machina, when the white female robot is looking at the faces on the wall which show her own evolution as the latest stage of humanity. Later in the film, there’s a parallel of that scene when she sees all the robots who came before her. They're all very racialized, and their bodies have been broken up, stripped apart, showing how racial violence underpins the evolution of both human- and robot-kind. I want to mention this because I think the field of AI is increasingly using these metaphors of evolution. It raises questions about what machines should be optimized for. Should they be optimized for speed, agility and accuracy? Are these the characteristics we think humans should also be optimized for? Parallels are being made between what people think that humans are ‘programmed for’ and what we should be optimizing machines for. What are the misconceptions that underpin this comparison? And how has this evolutionary model become a guiding metaphor in AI?


BLAISE AGÜERA Y ARCAS:

That's a really profound question. There are a lot of directions that we could take it in; let me try this one: for me, that word optimization is one of the biggest issues here. For one thing, the whole field of AI, of machine learning, is founded on optimization - on having a well defined loss function or metric that one is optimizing for in any given system. That, in turn, has been connected with Darwin or Spencer’s idea about life being optimized for survival, as if human life were like playing a hard video game against nature and winning by maximizing the number of its progeny. That perspective on nature is completely wrong, in the sense that what persists, exists. There’s often nothing particularly optimal about the crazy things that nature has done: the tail of a peacock, the mane of a lion, all these weird excesses. What has hung around over time is what we’ve got now. The problem with thinking about life in terms of optimizing some function is that it only works if you imagine that we exist in a fixed game where the environment is unchanged by whatever the player is doing. The player just gets to have a score. This is how DeepMind started their work, with games that are fixed, like chess or Go, and then you can talk about optimizing the player relative to that game. The problem is that this doesn't take into account the fact that everything is about interactions. It's all players all the way down. The game is everybody else. The reason that we have large brains, in my view, is almost certainly not about trying to outwit nature with some tooth and claw series of innovations, but rather trying to outwit and outmodel each other. Over the last 7 million years we have undergone an intelligence explosion in which our heads and brains have grown dramatically. Nature didn't suddenly get harder over these 7 million years; what happened was that we began to socially model each other, and that created an arms race between people.
This arms race was a cooperative one as much as a competitive one. Maybe there's no difference between cooperation and competition, when you look closely. Perhaps it’s really just a matter of perspective. It might look locally like optimization. But the moment you step back it no longer looks like optimization at all. It looks like a dynamic system with attractors and chaos and all kinds of things going on.


ELEANOR DRAGE:

Again, it’s good to hear this from a different disciplinary perspective, because it’s a major theme in queer theory, where writers like Lee Edelman in No Future and Jack Halberstam have critiqued ‘reproductive futurity’: the idea that you’re not really part of the human race if you’re not getting married and reproducing. Disability scholars and activists like Kenny Fries have also heavily critiqued the premise that disabled people do not contribute to humanity’s evolutionary imperative. It’s a fallacy that has had extraordinarily violent implications. So how exactly are you seeing these ideas about optimizing for survival playing out in AI?


BLAISE AGÜERA Y ARCAS:

It's a wonderful example that you raise. There’s a great book by the anthropologist and primatologist Sarah Blaffer Hrdy called Mothers and Others: The Evolutionary Origins of Mutual Understanding about alloparenting (care provided by individuals other than parents). She talks about how one of the rare and special things about humans, which makes us unique relative to all the other apes, is that we alloparent, meaning that you need ‘babysitters’ in order to help raise a baby; the mother alone can't provide the 13 million calories that it takes to raise a child to self-sufficiency. We have complicated adaptive and cooperative child rearing processes, which she argues is why we have become emotionally invested in each other and why we have intersubjectivity. I buy her thesis. In other alloparenting species, the most extreme or most skewed cases are ones in which only one female does all the reproduction, and everybody else is in a supporting role. That's true of certain eusocial insects and it’s true of naked mole rats and other mammals. There's also a whole intermediate range, where it literally looks like one super organism with a particular part of the body that is the germline, corresponding to one individual or a handful of individuals. That's how our own bodies work: all of our somatic cells are not busy propagating the species; only the germline, the ovaries and the testes, does that. If somebody is thin and somebody is fat, you don't accuse the thin person of not doing right by the human race because they've got fewer cells. That wouldn’t make any sense. This idea of perennial growth in the number of individual humans as being what the propagation of humanity is all about is almost like saying we need to gorge ourselves, we need to become as fat as possible, in order to propagate humanity. Perhaps it's not just numbers that matter.
Thank goodness we're now undergoing a turn in population numbers - thanks to birth control - because we're above the carrying capacity of the Earth. Our path to survival actually requires that we reduce our numbers on Earth, not keep growing them.


The arguments that you're talking about are also ones that have been invoked over and over by eugenicists working in the wake of Darwin and Spencer. There was a panic in the early 20th century about the poor out-reproducing the rich, which is still very much the case. If you plot the total fertility rate of a country versus its GDP or any other measure of development you see a very strong correlation: a high reproductive rate is correlated with low GDP. This was already noticed as early as the 1800s, or even the 1700s by some accounts. I think this to some extent underlies the English economist and demographer Thomas Malthus's argument that population growth will always outrun the food supply and that humankind requires stern limits on reproduction. The answer that white rich people have to start reproducing faster, or get into the business of sterilizing the other races lest they be outcompeted, is so obviously mad. I don't understand why we haven't connected those debunked and incredibly problematic arguments with the current new round of essentially eugenic thinking.


ELEANOR DRAGE:

It’s terribly sad how it’s had to take a heightened awareness of global warming to debunk some of these assumptions around evolution. I want to ask something slightly different now. You've said that for tech innovation to progress the number and density of the humans who interact with an invented technology need to be high. Can you explain what you mean by that, and what that means for excluded communities?


BLAISE AGÜERA Y ARCAS:

Because the interactions between people, or between intelligent nodes if you like, are fundamental to the rate of ideation - the rate of new ideas, new thoughts, new arts, new technologies - it follows that urbanization has been a really big deal with respect to accelerating human progress. There have been lots of studies - some of the fun ones come from the Santa Fe Institute - that show, for instance, that the number of patents, creative works, and all kinds of other things grows as a function of density. Per capita, they're greater in the cities than in the countryside because there are more connections. This is one of the reasons that minority communities have always gravitated to cities. It's a bit of a self-fulfilling prophecy, because you can't really create a culture or find ‘your people’ if the density is too low. It's one of the reasons that the American biologist and sexologist Alfred Kinsey wrote about gay culture as being fundamentally urban. Even if attraction to your own gender is universally distributed, all of the cultures that surround gayness, the things that make you not just homosexual but ‘gay’, are a function of all the interactions within the gay community that can only happen when the density is increased. And of course, up until the internet, that meant moving to the city. That has resulted in a sorting of people in which minorities are concentrated in cities. The same is true for immigration and for many other forms of minoritisation within a larger dominant majority. That sorting has also been one of the drivers of political polarization, because the result is a more culturally homogenous countryside and a more heterogeneous urban environment. In urban environments, therefore, time flows faster because of the greater rate of cultural innovation. You get a tidal shift in which time goes faster and faster in the cities, and then there’s a schismogenesis between the city and the countryside.
We see the effects of that in politics, not only in the US, but everywhere in the world, now that urbanization has increased.

ELEANOR DRAGE:

That’s fascinating. Queer rural histories have been so under-acknowledged over time, and it’s interesting to see how that sits in relation to these global trends. The Museum of English Rural Life recently ran a project that uncovered the experiences of queer rural people in the UK. Mary L. Gray, Colin R. Johnson, Brian J. Gilley, John Howard, and Ryan Lee Cartwright are good places to start.


KERRY MACKERETH:

Yes, absolutely. This brings us nicely to the really interesting work that you've been doing around identity, technology, and self perception as part of an upcoming book. For this study, you sent out a huge number of surveys, asking big questions to people all across North America: Who are you and what do you believe in? So, what did you find out from the surveys?


BLAISE AGÜERA Y ARCAS:

Well, it's a giant piece of work. I used the crowdsourcing platform Mechanical Turk to reach people, which is not perfectly representative. It over-represents young people and under-represents older people, for instance, and there are gender imbalances as well. But you can correct for those things. And you can ask a lot of questions about identity and beliefs. I did that over the course of five years. I've sometimes joked with friends that this book is a little bit like a cross between a Mechanical Turk-powered McKinsey & Company report for our generation and Yuval Noah Harari (author of Sapiens), because it asks questions about what a human and a posthuman future looks like. Fueled by those surveys, it's a bit of an odd duck sort of book. Some of the main findings are that young people are much queerer than older people, and urban people are much queerer than rural ones. There’s an increasing dissociation between identity and behavior, especially in the city. This is because cities are the crucibles for the creation of culture. The Internet changes all of this a bit, because you can create virtual cities with online communities; more connections can happen that way, and space can become more than two-dimensional. But still, city and countryside really matter. The people that you meet, even in pandemic times, the people that you know from your physical neighborhoods, still matter. And here’s what I mean by the dissociation between behaviour and identity: if you're bisexual, but you happen to have been in a heterosexual relationship for many years, I think most urban dwellers would say you don't have to hand in your bi card. That's a part of your identity that's independent of what you happen to be doing right now.
That attitude - that identity is something that persists and is independent of the behaviour of the moment - is much more the case in the city, because these issues of identity have a lot more to do with culture, as opposed to just being descriptors of what you are doing, what you feel at the moment, or what your behaviors are. We see this across the board, in every situation in which an identity can be thought of as distinct from the behaviours that it describes. That distinction is alive in the city; in the countryside, there's more of a ‘Why would you call yourself bi if you're not currently sleeping with both men and women?’ I’ve seen many similar examples of that.
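On the earlier point that a Mechanical Turk sample over-represents the young but "you can correct for those things": a standard correction is post-stratification, reweighting respondents so each demographic band counts in proportion to its share of the target population. A minimal sketch in Python - the age bands, population shares, and answers below are made up purely for illustration:

```python
from collections import Counter

# Toy survey: (age_band, answered_yes). The 18-29 band is over-sampled,
# as a crowdsourced panel typically would be.
survey = [
    ("18-29", True), ("18-29", True), ("18-29", False), ("18-29", True),
    ("30-49", True), ("30-49", False),
    ("50+",   False), ("50+",   False),
]
# Hypothetical shares of each band in the target population (must sum to 1).
population_share = {"18-29": 0.2, "30-49": 0.35, "50+": 0.45}

# Each respondent's weight = population share / sample share of their band,
# so over-sampled bands are down-weighted and under-sampled ones boosted.
n = len(survey)
band_counts = Counter(band for band, _ in survey)
weights = {b: population_share[b] / (band_counts[b] / n) for b in band_counts}

raw_yes = sum(yes for _, yes in survey) / n
weighted_yes = (sum(weights[b] for b, yes in survey if yes)
                / sum(weights[b] for b, _ in survey))
```

Because the over-sampled young band is down-weighted, the weighted estimate differs noticeably from the raw one; the same idea extends to weighting on several attributes jointly (age, gender, region, and so on).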


ELEANOR DRAGE:

What an interesting study. It’s quite amazing how tools like Mechanical Turk can make this kind of information harvesting possible, although we know these tools are complex and have plenty of downsides, including the well-publicised labour conditions that many - not all - workers endure. The fact that crowdsourcing tools have enabled us to ask questions on this kind of scale is still pretty phenomenal.


BLAISE AGÜERA Y ARCAS:

For what it's worth, I made sure that I paid all of my respondents well - I was definitely on the high end. But yes, it is an exploitative market in some ways. Although, I also have to say that my experience as a requester complicated my initial assumptions or feelings about it. There are many, many reasons that people do this kind of gig work, and people are in many different sorts of circumstances with respect to it. There's not one story.

ELEANOR DRAGE:

Given that today we're debunking quite a few myths associated with technology, on the subject of sexuality, which you brought up in relation to the urban-rural divide, can you tell us: why can’t you tell someone's sexuality by using AI to examine their face? What did you learn while writing this paper? To me, it seems like an exercise in gender studies! You always learn something along the way. What was your experience?


BLAISE AGÜERA Y ARCAS:

Yes, that was a wonderfully fun paper. It was a collaboration with Margaret Mitchell and Alexander Todorov (author of Face Value: The Irresistible Influence of First Impressions). Alex specializes in the social perception of faces and is based at Chicago Booth. Margaret was working with me at the time at Google. We were responding to one of several articles that were trying to revise and revive the practice of physiognomy - of attempting to read people’s essential characteristics from their face. The paper that we critique made a lot of waves at the time by claiming that from a selfie alone, an AI-powered face recognition system could do a good job of detecting whether you were lesbian, gay, or straight. It took selfies that people uploaded to Facebook and OkCupid to make that diagnosis. Supposedly it was *very accurate* - they said it had a 90% accuracy or something, which is certainly not accurate enough for many uses, but still a remarkable result. But it turns out that what the system was almost certainly picking up on was some combination of grooming - makeup, eyeshadow and other sorts of things - and, most entertainingly, the angle at which you take the selfie. If you take the selfie from above, it'll make your chin look smaller, it'll hide double chins, like the ones that I've got, and it'll make your eyes and forehead look bigger. If you take it from below, your jaw will look bigger and you'll also look a little bit more frowny. If you try this with a cell phone, you'll see that your mouth appears to change shape depending on the angle. To cut a long story short, straight women will generally take their OkCupid selfies from above, straight men will generally take theirs from below. Gay men and women will generally shoot from straight in front, and there's a straightforward explanation for that, which is that it matches the expected height of the person that you're after.
There's also a really interesting dominance hierarchy that comes in when you start to realize that - at least this is my pet theory - your smile and frown may actually be a simulation of what the face looks like from above and below. Anyway, the authors of the paper that we critiqued thought that they had discovered something very interesting, but I think it was exactly the opposite of what they imagined they were discovering. This is not about intrinsic properties of the human face, as if it were somehow a window into the soul, but rather about how, in some cases, the face exhibits quite interesting and subtle aspects of self-expression that are all about social signaling.


KERRY MACKERETH:

What a fantastic study - so interesting. We just want to say thank you so much for appearing on the podcast, it really was a delight to be able to have this conversation and we hope to be able to chat again soon.


BLAISE AGÜERA Y ARCAS:

This was really fun - the questions were fantastic. Thank you so much for your work in this area, for the wonderful questions, and for your curiosity. I really appreciate it.


ELEANOR DRAGE:

This episode was made possible thanks to our generous funder, Christina Gaw. It was written and produced by Dr Eleanor Drage and Dr Kerry Mackereth, and edited by Laura Samulionyte.



