A team of researchers that includes a scientist of Indian origin has found that artificial intelligence systems can acquire cultural, racial and gender biases when they are trained on ordinary human language available online.

The study, published in Science, found that a machine learning algorithm, without any supervision, learns to associate black names with unpleasant words more strongly than white names, and female names with family-related words more than with career-related words.

The team of scientists from Princeton University in the United States started out by testing a common artificial intelligence model for bias, then matched the results against a popular psychological test that measures bias in humans, the Implicit Association Test. The team was able to replicate in the algorithm (Global Vectors for Word Representation, or GloVe, from Stanford University) every psychological bias it tested.

For the uninitiated, the Global Vectors for Word Representation (GloVe) algorithm is trained on a massive crawl of the web and learns associations between billions of words.
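To make that concrete, here is a minimal sketch (illustrative, not code from the study) of how anyone can probe such a model: load the pretrained word vectors and compare pairs of words with cosine similarity, the standard measure of how strongly an embedding model associates them. It assumes a local copy of one of Stanford's pretrained GloVe files; glove.6B.300d.txt is the name of one of their standard downloads.

```python
import numpy as np

def load_glove(path):
    """Parse a pretrained GloVe text file into a {word: vector} dict.

    Assumes the standard Stanford format: each line holds a word
    followed by its space-separated vector components.
    """
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

def cosine(a, b):
    """Cosine similarity: how strongly the model associates two words."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

vecs = load_glove("glove.6B.300d.txt")  # substitute your local file
print(cosine(vecs["engineer"], vecs["he"]))
print(cosine(vecs["engineer"], vecs["she"]))
```

If the first similarity comes out noticeably higher than the second, the embeddings have absorbed exactly the kind of gender association the study describes.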

Machine learning algorithms have become so commonplace that they influence everything from scanning names on long lists to language translation. The research found that, unfortunately, the biases have become just as pervasive. They range from objectionable views of gender and race to the morally neutral, like a preference for flowers over insects.
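The study quantifies such associations with a statistic it calls the Word-Embedding Association Test (WEAT), modeled on the human Implicit Association Test: it measures how strongly two sets of target words (say, flowers and insects) associate with two sets of attribute words (pleasant and unpleasant). Below is a simplified sketch of that statistic; the abbreviated word lists are illustrative stand-ins for the longer lists used in the paper.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two word vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(w, A, B, vecs):
    """Mean similarity of word w to attribute set A minus its mean
    similarity to attribute set B."""
    return (np.mean([cosine(vecs[w], vecs[a]) for a in A])
            - np.mean([cosine(vecs[w], vecs[b]) for b in B]))

def weat_score(X, Y, A, B, vecs):
    """Differential association of target sets X and Y with attribute
    sets A and B; a positive score means X leans toward A and Y toward B."""
    return (sum(association(x, A, B, vecs) for x in X)
            - sum(association(y, A, B, vecs) for y in Y))

# Abbreviated, illustrative word lists; the paper's lists are longer.
flowers    = ["rose", "daisy", "tulip", "lily", "orchid"]
insects    = ["ant", "wasp", "spider", "moth", "beetle"]
pleasant   = ["love", "peace", "friend", "happy", "gentle"]
unpleasant = ["hatred", "ugly", "evil", "rotten", "filth"]

# Uses the vecs dict loaded in the previous sketch:
# print(weat_score(flowers, insects, pleasant, unpleasant, vecs))
```

A clearly positive score here would reflect the morally neutral bias mentioned above: the embeddings, like most humans, find flowers more pleasant than insects. Swapping in male and female names against career and family words reproduces the gender biases the study reports.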

As we grow more and more dependent on computers to process the natural language that human beings use to communicate, it is now more important than ever for the AI industry to address possible bias in machine learning.

Voicing his concern, Arvind Narayanan, assistant professor at Princeton University, says, "We have a situation where these artificial intelligence systems may be perpetuating historical patterns of bias that we might find socially unacceptable and which we might be trying to move away from."

According to experts, the solution is not to change the model: an AI's job is to capture the world as it is, and unfortunately, our world is full of bias. Changing the way the algorithm functions would only make it less effective. The better way out is for humans to work from the other end and help eliminate the bias present in the offline and online worlds. So, the next time you notice that Google Translate always translates "engineer" to "he" and "homemaker" to "she", take some time out to suggest a change.

[Top Image: CB Insights]