
Virginia Dignum on Engineers, Values and Giving the Public A Voice

Virginia Dignum is a Professor at the Department of Computing Science at Umeå University, Sweden, where she leads the research group Social and Ethical Artificial Intelligence. She is a Fellow of the European Artificial Intelligence Association (EURAI) and is also associated with the Faculty of Technology, Policy and Management at Delft University of Technology. Given the increasing importance of understanding the impact of AI at the societal, ethical and legal levels, she is actively involved in several international initiatives on policy and strategy guidelines for AI research and applications. As such she is a member of the European Commission High Level Expert Group on Artificial Intelligence, the IEEE Initiative on Ethics of Autonomous Systems, the Delft Design for Values Institute, the European Global Forum on AI (AI4People), the Responsible Robotics Foundation, the Dutch Alliance on AI (ALLAI-NL) and the ADA-AI foundation. Her research focuses on the complex interconnections and interdependencies between people, organisations and technology. It ranges from the engineering of practical applications and simulations to the development of formal theories that integrate agency and organisation, and includes a strong methodological design component.


Image by Alan Warburton / © BBC / Better Images of AI / Nature / CC-BY 4.0


Alt text: A photographic rendering of a succulent plant seen through a refractive glass grid, overlaid with a diagram of a neural network.


Transcript:


KERRY MACKERETH:

Hi! We're Eleanor and Kerry. We're the hosts of The Good Robot podcast, and join us as we ask the experts: what is good technology? Is it even possible? And what does feminism have to bring to this conversation? If you wanna learn more about today's topic, head over to our website, where we've got a full transcript of the episode and a specially curated reading list with work by, or picked by, our experts. But until then, sit back, relax, and enjoy the episode.


ELEANOR DRAGE:

Today we’re talking with Virginia Dignum, Professor of Responsible Artificial Intelligence at Umeå University, where she leads the Social and Ethical Artificial Intelligence research group. We draw on Dignum’s experience as an engineer and policy advisor to discuss how any given technology might not be good or bad, but is never valueless; how the public can participate in conversations around AI; how to combat evasions of responsibility among creators and deployers of technology, when they say ‘sorry, the system says so’; and why throwing data at a problem might not make it better.


KERRY MACKERETH:

Thank you so much for joining us today. So could you please introduce yourself, tell us a little bit more about what you do and what has led you to work on the social and ethical impact of technology?


VIRGINIA DIGNUM:

Okay, thank you very much for inviting me, it's my pleasure to be speaking to you on The Good Robot podcast. My name is Virginia Dignum. I'm a Professor of Responsible Artificial Intelligence at Umeå University in Sweden. I have been working on artificial intelligence since the mid-80s, so it's been quite some time now. I have worked both in industry and in academia, and throughout my quite long career by now, I have seen many different approaches to AI, many different hypes and [obsessions] around AI. But the constant in my work is trying to understand how these systems can be built in a way that they are of benefit to those who use them, whether organisations, individual users, or society in general. When I started [my career], I was working in industry for a large insurance company in the Netherlands, which was introducing what at the time were called expert systems. These were completely changing the way that people were working, in some cases with beneficial changes, but in other cases with more troubling ones. And that's what led me to try to understand what exactly is the impact [of AI], and how can we make this impact as beneficial as possible?


ELEANOR DRAGE:

Our podcast is called The Good Robot so we'd like to ask what good technology means to you - can there be such a thing as good technology? And would it, for example, be optimised for efficiency or for another quality?


VIRGINIA DIGNUM:

Very good question. Technology is in itself not good or bad; we can make good or bad uses of technology. If we talk about good technology, the first thing that comes to my mind is technology that is well built, that is safe, that’s correct, that doesn't crash, that doesn't do very strange or unexpected things. So robust technology, that would be for me as an engineer the first association with good technology. But actually the good of the technology is the good that we make of it. Like any other technology, like any other tool, it can be used for good or for bad. And [how we use technology] depends a lot on our own approach, our own incentives, and our own motivations. Indeed we see in AI that a lot of what is being developed is with this view to optimise for accuracy, optimise for speed, optimise for technical qualities, but it doesn't mean that we cannot also do this optimisation for societal impact, for privacy, for safety, for fairness, for inclusion. We can look at those principles also as motives or as requirements for optimisation. And that means that we can really look at good technology from the perspective of a beneficial approach to the use of technology in society.


ELEANOR DRAGE:

Your answer has also complicated what it means for AI to go wrong. And so our next question is how do we deal with AI that goes wrong? And what does it mean for AI to ‘go wrong’ from an engineering perspective, or from a regulatory perspective?


VIRGINIA DIGNUM:

AI can go wrong, any system can go wrong, in two ways. As I said, from an engineering perspective, AI goes wrong when we build it incorrectly, when we haven’t been explicit about the requirements, the objectives and the functionalities of the system, when the system is built in an incorrect way, for example by not using the most appropriate techniques or components. And we see that nowadays a lot with the over-emphasis on data-driven approaches to AI. There are lots of approaches to building AI, of which the data-driven ones are those traditionally used in deep learning, in neural networks, in machine learning. These are extremely data-driven, and we often use those technologies without really considering other approaches. There are many other approaches to making AI, which focus more on, for example, knowledge representation, or on searching and planning capabilities, and those techniques are at this moment not used very much, because everybody thinks that if we can throw more data at a problem, or throw more computational power at that problem, we will solve it better. So that is how I think AI can go wrong from an engineering perspective: because we don't think about the full palette of opportunities that are there, the diversity of technologies that can be used, and we just go for the quickest and the easiest. We go straight to the most hyped technology or technique instead of thinking about what the problem is and what the best technique for this specific problem is. In terms of AI impact, things can go wrong when we as people don't take enough care of the responsibility that we have for the way that we are using these types of systems. It’s extremely easy to pass the responsibility or the decision power to the system, and to hide ourselves behind the idea ‘sorry, the system says so’. And we see at many, many different levels and in many applications that that’s the easy way out; we can just forget our own responsibility and hide ourselves behind whatever the system is doing. And that's actually quite worrying, too, as we are the ones who should be accountable and should take the responsibility for what the systems are doing, as well as for the decision to use a system to take a particular decision in the first place. And this is where I think discussion is needed, particularly around the regulatory incentives that are required to foreground our responsibility for what we do with these systems.


ELEANOR DRAGE:

Something that we're really interested in is the regulation of AI, as you just talked about, and the challenge of creating AI ethics frameworks and toolkits that are really useful and properly ethical, making a meaningful difference. So we're seeing feminist and anti-racist knowledge and methodologies being translated extremely effectively into data science guidelines, but practitioners are still on the hunt for ways of making toolkits practicable. So in your view, how can we effectively translate these toolkits into practice?


VIRGINIA DIGNUM:

Very good question. And indeed, this is a challenge that we are all noticing at the moment. I think that there are a few ways to do it. We can think about integration with standard software development tools, integration with all the methods and processes that practitioners use to develop software, and bring AI ethics in as functional and non-functional requirements to the practice of engineers. So it's not something you do at the end of the process, which is often what people do when they say, okay, we have built our system, it works, now we have this extra burden of adding ethics to it. I think that attitude needs to change. We need ethics and responsibility and participatory approaches to be central to the design of systems. It needs to be part of it from the very beginning. It’s not an add-on, it’s a fundamental part of the development of systems. We have to really work on the integration of these issues with standard software and system development approaches. At the same time, we also really need to bolster the awareness of both practitioners and the general public that we, as consumers and as citizens, also have the choice to decide and demand systems that are built and integrated with fundamental principles of ethics, human rights and societal values: we can demand this. And we really need to make the public aware that they have this voice in the discussion. And that means that policymakers, governments and international organisations also need to create incentives and an environment in which the public voice can be heard in the conversation.


ELEANOR DRAGE:

Kerry and I take feminist ideas as our starting point for investigating the value systems that are embedded in technology. In your work on the decision-making capabilities of intelligent agents and other synthetic entities, what informs your understanding of which kinds of moral and ethical standpoints they should take?


VIRGINIA DIGNUM:

Thank you, very interesting question. I think that in my work, I try to take a participatory and diverse approach. It is important to realise that each one of us holds only a part of the truth, and only a part of the overall understanding of the problem. We can understand the problem much better if we bring many different types of voices into the discussion. I like to talk about inclusion in general rather than just about gender-based inclusion. I think it's also very important to include different cultures, different disciplines, and the groups that are usually considered less worthy of inclusion or participation in the conversation. And that can take many different forms. Of course, the gender issue is one which we cannot forget. In technology and engineering in general it’s still a very important part of the discussion, and a very important part of what is missing, alongside conversations about different cultures, different disciplines, different groups, different ages, different kinds of diversity. I think that's the key word here: we really need to take a diverse and inclusive approach to the design, development and understanding of these systems.


KERRY MACKERETH:

Fantastic. Something else that we're really interested in, that your work touches on quite a lot, is the idea of ‘human-AI ecosystems’, the kinds of relationships and partnerships that arise when humans and AI work together, for example in the context of customer service chatbots. So, what do you think constitutes good human-AI cultures and relationships? And conversely, what are the ethical risks of human-AI ecosystems?


VIRGINIA DIGNUM:

Thank you. Human-AI interaction is indeed a central part of the work that I do, human-centred AI we like to call it. It is about understanding that the AI system is the tool, the instrument, that can be used by humans, by society in general, to improve our own wellbeing, our own condition, let's say. So it is a partnership in which we humans are in the driver's seat, and where we use the technology to support our individual and our societal wellbeing as much as possible. In this sense, it is important to build the systems to adapt to what the user needs, what society’s needs are. It's important to build these systems with the understanding that what is best for me individually is not always what is best for society in general; it might not be the best for all of us together. So collective and individual perspectives need to be taken into account in this ecosystem between AI systems and humans. And one of the key risks is that we rely too much on, or are too easily nudged by, these types of systems. We are too easily lulled into laziness because the systems are going to make more and more of the decisions that we should be taking ourselves. So it's important to create a balance between making activities easier - and a lot of these chatbots or ledgering systems can help us do this by providing us with the best possible information and guiding us to the best possible activities and actions. But again, here, what is best for us is not always what we think is best; we might want to have something, but that's not necessarily what is best for us. So we really need to be open to and accepting of the balance between what is good for us and what we think we would like. And indeed, the risk here is that we build systems that make us too complacent and too lazy. And we really need to find a good balance between our own wellbeing and the ease of decision-making that these systems can bring.


ELEANOR DRAGE:

Something that Kerry and I think about a lot is how people should be educated about AI in order to equip themselves to interact well with AI in these contexts, both in the corporate sector, and in society at large. So what do you think are appropriate ways of exploring how people interact, negotiate, trust, and cooperate with autonomous cognitive entities in these different settings?


VIRGINIA DIGNUM:

Yeah, good question. I think that central to this question is understanding that AI is not magic. AI is not the solution to all our problems. No technology is a solution to all our problems. I actually just recently read a quote by Laurie Anderson, in which she says that when we think that technology will be the solution to our problems, we don't really understand technology, but we also don't really understand our problems. So it's good to realise that an awareness of AI involves understanding that there are limits to what AI can do for us, and that there should be some limits on how we apply AI. So the first part of this awareness, of this education, is to understand what these systems can really do and how they work. And I really don't mean that everybody needs to be able to build an AI system, or even to understand the technology behind them fully. But we really need to understand that what we are currently calling AI systems are basically systems that are able to identify patterns in huge amounts of data. That is very useful in many types of decisions, but it's only one way to look at intelligence. The fact that we identify patterns in data doesn't mean that the system understands what that pattern is. A system can be very good at identifying cats in pictures, but it still will have no idea what a cat is. So we have to realise that this is the level of the systems that are making decisions on our behalf. And we really need to provide this awareness to everybody, both to corporations and to the public. Next, we really have to work on the other side. And by that I mean the education of the engineers and the experts that are building these systems. Again the point is to bring into the education of computer scientists and engineers the necessary understanding of the societal and human impact of the technologies that they are building. Technologies are not good or bad, but they are not valueless. The way that we build technologies involves, by definition and by necessity, incorporating the values that we as engineers adhere to. This happens even if we are not aware that we are implicitly building the values that we deem important into technology, or are not really aware of its potential impact. So on the one hand, we have to help everybody understand that AI is not magic. And on the other extreme, the other side of the spectrum, we have to help engineers understand that what they are doing has social, ethical, and societal impact.


KERRY MACKERETH:

That's so fascinating. I think we have time for one more question: I'm really interested in what you say about technology not being good or bad, but also not being valueless, and I was wondering, when do you think there's a case for refusing AI? Do you think there are certain circumstances where we should just say, actually, this particular technology is not working for us, it's not contributing to human good, or where AI is just not the right tool to be used? Or do you think there are always ways of recuperating AI, so to speak, of improving it and making it better?


VIRGINIA DIGNUM:

As an AI researcher, yes, I do believe that there are ways to improve AI, otherwise I wouldn't be doing the right thing as a researcher in AI. But I do think that there is a need for a broader societal discussion about the limits and the conditions under which we want to use these types of systems. There is a lot of discussion about the areas in which we should or should not use AI. But from my perspective, the moment that we use a system to take decisions about someone's life, small or big decisions that impact someone's life, and we are using a technology whose process for delivering an output we don’t fully understand, then we should really start thinking about whether this is the technology that we want to use. If a system decides on your credit score, on your court case, or on your medical diagnosis, and experts are not able to understand why it is suggesting a particular decision, then we really need to think about whether this is a good technology to use here, whether we really should be taking the decisions of the system blindly into society. Or should we scratch our heads before we use these types of systems, independent of the area or sector in which they are being applied?



KERRY MACKERETH:

Amazing, thank you so much, it has been such a joy to have you with us and discuss all the phenomenal work that you're doing and have been doing for such a long time on good and responsible AI. So we just want to say thank you again for joining us. We really appreciate it.


VIRGINIA DIGNUM:

Thank you very much, it was my pleasure. Thank you.


ELEANOR DRAGE:

This episode was made possible thanks to our generous funder, Christina Gaw. It was written and produced by Dr Eleanor Drage and Dr Kerry Mackereth, and edited by Laura Samulionyte.



