AI and algorithms are increasingly becoming part of our lives, influencing everything from what we see on the internet to assessments of how likely we are to commit crimes.
While AI undeniably makes our lives easier, there is growing concern that undesirable human biases like racism and sexism might be creeping into it.
Is AI biased? And if so, what is causing that bias?
Racism in search results
In 2015, graphic designer Johanna Burai created the World White Web project in an effort to counter the "whiteness" of Google's image search results.
When she searched Google for images of hands, Burai found that only white hands showed up in the top results.
Google maintained that the bias of its search engine algorithm was not a reflection of its "values".
The problem is being recognized
"I think it's getting better... people see the problem. When I started the project people were shocked. Now there's much more awareness," said Johanna Burai.
Racism in facial recognition software
In November 2016, Joy Buolamwini, a postgraduate student at the Massachusetts Institute of Technology, launched the Algorithmic Justice League (AJL) in a bid to root out racism from algorithms.
While trying to use facial recognition software, Buolamwini, who is dark-skinned, found that it couldn't process her face until she put on a white mask.
A white mask made her face easier for computers to read
"I found that wearing a white mask, because I have very dark skin, made it easier for the system to work. It was the reduction of a face to a model that a computer could more easily read," said Joy Buolamwini.
Lack of diversity in the tech industry
Joy Buolamwini thinks that the biases reflected by algorithms are the result of a lack of diversity in the tech industry.
As of January 2016, Google reported that only 19% of its employees were women, and only 1% were black.
In June 2016, Facebook reported that 17% of its employees were women, and only 1% were black.
Microsoft also reported similar figures.
How lack of diversity affects algorithms
"If you test your system on people who look like you and it works fine then you're never going to know that there's a problem," Joy Buolamwini said, pointing out how lack of diversity affects algorithms.
Training data sets for AI are not diverse
It's well known that AI is only as good as the data sets it is trained on.
Given the lack of diversity at the major tech companies that design these algorithms, it can safely be assumed that the biases reflected by AI and algorithms stem from software being trained on data sets that do not include a diverse range of people.
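This effect can be illustrated with a toy, entirely synthetic sketch. Everything below is an assumption made for illustration — the data, the "brightness" feature, and the nearest-centroid "face detector" are invented, and no real facial recognition system works this simply. The point is only that a model trained without dark-skinned faces can end up systematically failing on them:

```python
import random

random.seed(42)

def gauss_samples(mean, n):
    # Hypothetical "average pixel brightness" of an image patch.
    return [random.gauss(mean, 0.05) for _ in range(n)]

# Synthetic training set: light-skinned face patches and background
# patches, with no dark-skinned faces represented at all.
train_faces = gauss_samples(0.75, 200)       # light-skinned face patches
train_background = gauss_samples(0.15, 200)  # non-face patches

# Nearest-centroid "detector": a patch counts as a face if its brightness
# is closer to the average training face than to the average background.
face_centroid = sum(train_faces) / len(train_faces)
bg_centroid = sum(train_background) / len(train_background)

def is_face(brightness):
    return abs(brightness - face_centroid) < abs(brightness - bg_centroid)

def detection_rate(samples):
    return sum(is_face(x) for x in samples) / len(samples)

light_test = gauss_samples(0.75, 100)  # faces resembling the training data
dark_test = gauss_samples(0.35, 100)   # dark-skinned faces, absent from training

print(f"light-skinned faces detected: {detection_rate(light_test):.0%}")
print(f"dark-skinned faces detected:  {detection_rate(dark_test):.0%}")
```

Because the model's notion of "face" is an average over an unrepresentative training set, it detects nearly all of the light-skinned test faces but almost none of the dark-skinned ones — a crude analogue of the failure Buolamwini encountered.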
Inclusivity is imperative for a better future
"Any technology that we create is going to reflect both our aspirations and our limitations. If we are limited when it comes to being inclusive that's going to be reflected in the robots we develop or the tech that's incorporated within the robots," Buolamwini explained.
A possible solution to the problem
Suresh Venkatasubramanian, an associate professor at the University of Utah's School of Computing, says that the problem of human prejudice in AI can be solved.
First, build diverse data sets that include all types of people.
Second, share best-practice procedures among software vendors.
And finally, build algorithms that explain their decision-making, so that biases can be identified and rectified quickly.