Why some Instagram filters don’t work in Texas, Illinois

On Balance with Leland Vittert

CHICAGO (NewsNation) — Instagram users in Texas and Illinois may not have access to some filters after a pair of lawsuits that have Instagram's parent company, Meta, concerned about heading back to court.

The core issue is whether using those filters gives Meta more access to users’ personal information than it warns them about.

Texas is suing Meta, claiming the company violated Texas' Capture or Use of Biometric Identifier Act by capturing people's facial biometrics without their consent.

Illinois has what some consider the strictest biometric privacy law in the U.S. A recent $650 million class-action settlement over Facebook’s automatic facial tagging feature resulted in checks for $397 that went out to class members earlier this month.

But Meta says this is not facial recognition at all.

The filters use augmented reality, Meta says, which it claims does not identify people, only the shapes of facial features needed to trigger the effects. Common examples of the filters include adding dog ears or changing the color of your hair or eyes.

A Meta spokesperson told the Chicago Tribune the company was only taking these actions to “prevent meritless and distracting litigation under laws in these two states based on a mischaracterization of how our features work.”

Last year, Meta announced it would be shutting down its facial recognition software on Facebook that identifies faces in photos and videos.

Following the Texas lawsuit, Meta said in a statement to Adweek, “The technology we use to power augmented reality effects like avatars and filters is not facial recognition or any technology covered by the Texas and Illinois laws, and is not used to identify anyone.”

Meta has turned the feature back on for users in Texas and added an “opt-in” message when users try to apply a filter.

Even if this is not facial recognition, privacy advocates argue that other companies can acquire data from social media platforms to improve their own technology.

For example, Clearview AI uses photos it takes from social media to train its algorithms for more precise facial recognition matches, according to the company's website.

But critics say even a system that’s right 90 percent of the time could lead to disastrous results for the other 10 percent.

“Right now, the government is allowed to use facial recognition software in order to charge people with crime and find people who they are then going to charge with crimes,” said data science professor Liberty Vittert during an appearance on Wednesday’s “On Balance with Leland Vittert.” “That’s wrong and that’s what needs to be banned immediately before more people’s rights are violated,” she said.

“Meta is not what we should be worried about. It’s all of these other companies that are using it and the police departments that is using it to then charge crimes,” she said.
