Algorithms now shape a growing share of consequential decisions, which makes it essential to ask whether those decisions are fair.
Research suggests that roughly 80% of machine learning models can exhibit bias, and the harm falls hardest on groups that are already marginalized or underserved.
Facial recognition illustrates the gap starkly: reported error rates reach 34% when identifying Black faces, compared with only about 1% for White faces.
And in one survey, 47% of Americans said they believe algorithms could deepen existing social inequalities. That is a clear signal that we need to scrutinize how these systems are built and deployed, and to ensure they treat everyone fairly.
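Disparities like the 34% vs. 1% figure above are typically measured by comparing error rates across demographic groups. The following is a minimal sketch of that kind of audit; the data, group labels, and function names are all hypothetical, invented purely for illustration.

```python
# Minimal sketch: quantifying a per-group error-rate gap, one common way
# to measure the kind of accuracy disparity described above.
# All data below is hypothetical and used only for illustration.

def error_rate(y_true, y_pred):
    """Fraction of predictions that are wrong."""
    wrong = sum(1 for t, p in zip(y_true, y_pred) if t != p)
    return wrong / len(y_true)

def error_rate_gap(y_true, y_pred, groups):
    """Largest difference in error rate between any two groups."""
    by_group = {}
    for t, p, g in zip(y_true, y_pred, groups):
        ts, ps = by_group.setdefault(g, ([], []))
        ts.append(t)
        ps.append(p)
    rates = {g: error_rate(ts, ps) for g, (ts, ps) in by_group.items()}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical labels, model predictions, and group membership
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 0, 0, 1, 0, 1]
groups = ["a", "a", "b", "b", "b", "a", "a", "b"]

gap, rates = error_rate_gap(y_true, y_pred, groups)
print(rates)  # per-group error rates: {'a': 0.0, 'b': 0.75}
print(gap)    # 0.75
```

A large gap like this is a red flag that a model's mistakes are concentrated in one group, which is exactly the pattern the facial recognition studies report.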