

Why Gender-Neutral Facial Recognition Will Change How We Look at Technology



When we look at the world, we don't always see clearly. We can be tripped up by our own biases or by a lack of information. Facial recognition algorithms may help us change that.

Facial recognition technology, made possible by advancements in artificial intelligence, has a wide range of potential uses — from allowing a worker to scan into the office using just their face to identifying how a person really feels.

The technology also faces steep challenges. Studies of facial recognition algorithms have found that developers don't always account for biases in data sets. In some cases, these facial recognition algorithms are reinforcing biases in fields like healthcare and law enforcement.

If more inclusive facial recognition technology can overcome these criticisms, however, it could completely change how we interact with our devices.

The potential of facial recognition technology

Google recently made headlines when the company changed how its AI image-recognition algorithm looks at people. Instead of labeling someone "man" or "woman," the system now uses the word "person."

Google's decision is part of a broader pivot towards gender-neutrality in tech. Other companies have also made moves to create technology that challenges stereotypes and is more inclusive — like new gender-neutral voices for virtual assistants.

One reason facial recognition technology is so exciting to experts right now is that these algorithms may completely reshape how we interact with technology.

For example, facial recognition systems could use their ability to detect patterns in facial expressions to read people's actual emotional states.

Some new facial recognition technologies may soon be able to strip away people's "poker faces" and get at how a person is really feeling. The tech could be used to detect when someone is lying or poised for violence, or it could help a school counselor identify whether a cheery student actually needs someone to talk to. These algorithms could look past the masks people put on and more accurately identify when someone needs help but can't find a way to ask for it.

Some experts are already trying to make this tech a reality. Poppy Crum, Chief Scientist at Dolby, is using advanced sensors and AI to let computers detect users' emotional signals. She hopes the new technology could create more compassionate machines: devices like Alexa could be repurposed into more attentive healthcare helpers that lack a human's biases and don't get tired or develop compassion fatigue.

It's easy to imagine these kinds of algorithms as part of a future where computers can look past bias and the masks people put on. Those machines could then pass that information on to others, helping bridge communication barriers.

The criticisms of facial recognition tech

However, facial recognition technology also faces significant challenges.

Because these programs learn by identifying relationships and correlations between components of an existing data set, a trained model can only reproduce the patterns, good and bad, already present in that data. As a result, it's very easy for AI algorithms to replicate existing systemic bias.
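
To make that mechanism concrete, here is a minimal Python sketch of how a model can inherit bias from its training data. Everything in it (the synthetic "skill" feature, the proxy variable, the skew of -0.8) is an illustrative assumption, not data from any real system:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    group = rng.integers(0, 2, size=n)   # sensitive attribute (0 or 1)
    skill = rng.normal(0, 1, size=n)     # genuinely predictive feature

    # Biased historical labels: group 1 was approved less often than
    # "skill" alone would justify.
    label = (skill + np.where(group == 1, -0.8, 0.0) + rng.normal(0, 0.5, n)) > 0

    # The model never sees `group` directly, but a correlated proxy leaks it in.
    proxy = group + rng.normal(0, 0.3, size=n)
    X = np.column_stack([skill, proxy])

    model = LogisticRegression().fit(X, label)
    pred = model.predict(X)

    for g in (0, 1):
        print(f"group {g}: predicted approval rate = {pred[group == g].mean():.2f}")

Even though the model is never shown the group label itself, the correlated proxy lets it learn, and then repeat, the historical skew in the labels.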

For example, facial recognition algorithms that were being used in law enforcement were found to disproportionately target African Americans. Algorithms developed in Japan, China and South Korea tended to be much better at identifying East Asian faces than Caucasian faces. Another study found that a healthcare algorithm, designed to help doctors make decisions about medical care, prioritized the needs of white patients over those of black patients.

Multiple studies of Amazon's facial recognition algorithm have found the technology to be biased. In one of the most recent, the algorithm consistently misidentified the gender of women and of darker-skinned people.

And Google's change to its image-recognition algorithm, while generally regarded as a positive move towards better treatment of gender issues in tech, only came in the wake of criticism. Before the switch to labeling everyone "person," regardless of gender, the algorithm tended to identify pictures of people cooking as women, even when the subject was a man.

Without the right oversight, it seems, existing bias will simply creep back into the system. Worse, these AI algorithms may even reinforce that bias if their users believe the algorithms are impartial.

With oversight, it's possible for developers to correct for these challenges and create algorithms that serve everyone. Implementing best practices, like establishing metrics for fairness, organizing more diverse developer teams and testing algorithms in real-world scenarios, can help. There are also toolsets designed to help developers identify bias in their data, as the sketch below illustrates.
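
As one illustration of what a "metric for fairness" can look like in practice, here is a minimal Python sketch that compares selection rates and error rates across two groups. The tiny arrays are invented stand-ins for real evaluation data; open-source toolkits such as Fairlearn and IBM's AI Fairness 360 offer more rigorous versions of these checks:

    import numpy as np

    # Invented evaluation data: ground truth, model predictions, and a
    # sensitive attribute for each of ten people.
    y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
    y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 0, 0])
    group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

    for g in (0, 1):
        mask = group == g
        selection_rate = y_pred[mask].mean()                # share predicted positive
        error_rate = (y_true[mask] != y_pred[mask]).mean()  # share misclassified
        print(f"group {g}: selection rate {selection_rate:.2f}, "
              f"error rate {error_rate:.2f}")

    # Demographic parity difference: the gap in selection rates between groups.
    gap = abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())
    print(f"demographic parity difference: {gap:.2f}")

Tracking a gap like this across releases gives a team a concrete number to hold an algorithm to, rather than relying on an impression of impartiality.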

However, as these failures demonstrate, developers don't always take those extra steps.

Facial recognition may change how we think about technology

Facial recognition algorithms have the potential to reframe how we think about technology. Creating more gender-inclusive and compassionate tools could greatly expand the roles that technology can play in our lives. In the future, facial recognition technology could enable virtual counseling assistants or healthcare helpers.

There's also a lot of room for error. If not properly managed, these facial recognition algorithms can easily be as biased as any person. Worse, the appearance of objectivity these algorithms can provide may even reinforce bias.

With the right oversight, it's possible for developers to avoid recreating biases in their algorithms and to create future technology that is more inclusive. However, that will only happen if they take the right steps and treat the risks facing facial recognition technology seriously.