Confronting AI's inclusion problem
The ethics of AI remain a hot topic as we begin the new decade, specifically around algorithmic bias. Much as a child mimics a parent, machine learning models mimic the behavior of their human creators, who are prone to biased thinking. We’ve seen AI perpetuate and even amplify injustice in hospitals, courtrooms, and the workplace, and warning flags are up in higher education.
As this is happening, significant research is pursuing breakthroughs that would allow AI to meet anti-discrimination standards and policies. Leidos data scientists are taking up this challenge by engineering models with greater transparency. For example, one program in the intelligence community sheds light on data classification decisions made within the inscrutable black box of deep neural networks. At last summer’s AI Palooza, Dr. Vicente Ordonez Roman presented his team’s award-winning research, sponsored by Leidos, which studied why machine learning amplifies gender stereotypes in image recognition and natural language processing.
How can we trust AI when it has earned a healthy distrust in the past? To learn more, we welcome Dr. Shirley Cavin, a forceful advocate for greater diversity and inclusion in the field. Later this month she’ll sit on a panel, hosted by Leidos and Scotland Women in Technology, to discuss important topics regarding ethics in AI.
Q: According to the headlines, AI is not working for everyone. What will it take to change this?
Dr. Cavin: The bottom line is we need more accountability. Often we don’t really know how our AI makes decisions. This might be okay with trivial matters, but definitely not when algorithmic bias causes discrimination or other harm to people. More broadly, we need to be aware of the damage our technology can cause. As technologists, researchers, and scientists, sometimes we get lost in the depth of the challenge itself. We become so focused on solving the problem that we don’t fully think through how it will be used. AI is powerful because of its potential to solve a lot of our complex problems, but when we include technology in human activities, we need to think about serious things like safety, security, reliability, and fairness. We can’t develop technology first and think about its consequences later. It’s both together. It’s important to note that while we want to promote responsible technology, we also don’t want these factors to kill innovation.
"We can't develop technology first and think about its consequences later. It's both together."
Dr. Shirley Cavin, Data Science Manager, Leidos UK
Q: Where does your passion about this issue come from, and why should others care as deeply as you do?
Dr. Cavin: Inclusion is a societal issue, which makes us all vested stakeholders. We don’t want to discard any member of our society. I guess my passion comes from my soft side. I love technology, but I also have a family. I’m a mother. So I try to bring all these things forward so that technology improves our lives. We can either do technology for the sake of technology, or we can do the much smarter thing and understand its consequences and impact on human life. I’ve listened to a lot of talks by researchers who are doing very, very interesting things with AI. But often when I raise questions about the consequences of that technology, these important questions go unanswered.
Q: How do machines become biased?
Dr. Cavin: AI is the type of technology that enables systems (software and hardware) to carry out activities that require “intelligence,” which is the ability to use information, knowledge, and experience to make decisions and perform actions to achieve a set of goals. AI aims to replicate human behavior and the human way of learning, and that learning occurs within the context of the communities in which the AI exists. First, if those communities have forms of social and cultural bias or discriminatory attitudes towards certain people and ideas, the AI can easily adopt those as well and even amplify them. Second, this type of bias might occur by design: as AI systems are designed, the designers could bring their own personal bias into the design of a system, and a set of discriminatory preferences could make their way into the system.
Third, bias could occur as the result of the data used to train and test the AI system. If this data is not a close and true representation of the ecosystem where the system is intended to be used, the results may be biased toward a certain portion of the population. Even when this is overcome and a truly representative dataset is used, that still may not guarantee the reduction or elimination of social and cultural bias. For example, in communities where minority groups or views make up only a small portion of the population, they will be rare in the data used for training and testing. All of these considerations, if not well thought through or understood, may lead to mistrust and concerns about using AI in our communities, which would be very unfortunate given AI’s great potential to benefit society.
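To make that third mechanism concrete, here is a minimal, hypothetical Python sketch, not drawn from Dr. Cavin or Leidos: a classifier trained on data dominated by one group learns that group’s pattern and performs noticeably worse for an under-represented group whose pattern differs. The group names, label rules, and sample sizes are all invented for illustration.

```python
# Hypothetical illustration: an under-represented group whose feature/label
# relationship differs gets worse accuracy from a model fit mostly to the
# majority group's data. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, weights):
    """Toy group: two features; the label rule is group-specific."""
    X = rng.normal(size=(n, 2))
    y = (X @ np.array(weights) > 0).astype(int)
    return X, y

# The majority group dominates the training set; the minority group is scarce.
X_maj, y_maj = make_group(2000, [1.0, 0.0])  # label driven by feature 0
X_min, y_min = make_group(100, [0.0, 1.0])   # label driven by feature 1

model = LogisticRegression().fit(
    np.vstack([X_maj, X_min]), np.concatenate([y_maj, y_min])
)

# Fresh test samples per group: the model fits the majority pattern well
# and is close to guessing for the minority group.
for name, (X_test, y_test) in {
    "majority": make_group(5000, [1.0, 0.0]),
    "minority": make_group(5000, [0.0, 1.0]),
}.items():
    print(f"{name} accuracy: {model.score(X_test, y_test):.2f}")
```

A more balanced training set narrows this gap, though, as Dr. Cavin notes, representative data alone does not guarantee the removal of bias already embedded in the labels.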
Q: Where have you seen the most progress?
Dr. Cavin: Frameworks that enable us to think further ahead are helpful when we develop programs, which is why I am quite pleased governments are taking the lead. For example, the European Commission is piloting an ethical framework, one that we, as part of technology-driven organizations, should follow. But leaving ethics issues only to ethics professionals isn’t enough. For me, it needs to be a collaborative environment where everyone is participating in these discussions. Our common goal should be to make AI safe and inclusive, but also to promote the technology and continued innovation.
Q: What are the ingredients required to build this trust?
Dr. Cavin: Accountability, transparency, and fairness. When something goes wrong, there should be a human in the loop, a responsible party who can take ownership and fix the problem. We must also be transparent about the system’s design, training, testing, and performance, which means we need to understand it ourselves. Finally, we must promote fairness by using data that is representative of the population, with no bias toward or against particular groups of people, and by making sure results are valid and outcomes respect the interests and views of the whole ecosystem where the system is intended to be used. Other key ingredients for building trust in AI systems are safety and sustainability. These are particularly important to consider before any AI solution is deployed for public use, so efforts should be put in place to safeguard the well-being of communities as they begin to adopt AI.
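One practical way to act on the transparency and fairness ingredients is per-group reporting before deployment. The following is a minimal, hypothetical sketch, not a Leidos tool or a method from the interview; the function name, group labels, and numbers are invented. It compares selection rates and accuracy across groups instead of relying on a single aggregate score.

```python
# Hypothetical per-group report: surface gaps in outcomes and accuracy across
# groups so they can be investigated before an AI system is deployed.
import numpy as np

def group_report(y_true, y_pred, groups):
    """Print the selection rate and accuracy for each group."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    for g in np.unique(groups):
        mask = groups == g
        rate = y_pred[mask].mean()                   # share given the positive outcome
        acc = (y_pred[mask] == y_true[mask]).mean()  # correctness within the group
        print(f"group={g:<8} n={mask.sum():3d} "
              f"selection_rate={rate:.2f} accuracy={acc:.2f}")

# Made-up example: a large gap in either column is a signal to revisit the
# data and the model, not a verdict on its own.
group_report(
    y_true=[1, 0, 1, 1, 0, 1, 0, 0],
    y_pred=[1, 0, 1, 1, 1, 0, 0, 0],
    groups=["a", "a", "a", "a", "b", "b", "b", "b"],
)
```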
Note: Inclusion is one of six Leidos core values. We define inclusion as fostering a sense of belonging, welcoming all perspectives and contributions, and providing equal access to opportunities and resources for everyone. This series previously explored how we can build trust in decisions made by AI.