Facebook to shut down face recognition system, delete data

FILE – In this March 29, 2018, file photo, the Facebook logo appears on screens at the Nasdaq MarketSite in New York’s Times Square. (AP Photo / Richard Drew, File)
Providence, RI (AP) – Facebook said it will shut down its face recognition system and delete the faceprints of more than 1 billion people.
“This change will represent one of the biggest shifts in the use of face recognition in the history of technology,” said a blog post Tuesday from Jerome Pesenti, vice president of artificial intelligence for Facebook’s new parent company, Meta. “Its removal will result in the deletion of more than a billion people’s individual face recognition templates.”
He said the company was trying to weigh the positive uses of the technology “against growing societal concerns, especially as regulators have not yet provided clear rules.”
Facebook’s about-face follows a busy few weeks for the company. On Thursday, it announced a new name – Meta – for the company, but not the social network. The new name, it said, will help it focus on building technology for what it envisions as the next iteration of the Internet – the “metaverse”.
The company is also facing perhaps its biggest PR crisis to date after leaked documents from whistleblower Frances Haugen showed that it has known about the harm its products cause and has often done little or nothing to mitigate it.
More than a third of Facebook’s daily active users have opted in to having their faces recognized by the social network’s system. That is about 640 million people. But Facebook has recently begun to scale back its use of face recognition after introducing it more than ten years ago.
In 2019, the company ended its practice of using face recognition software to identify users’ friends in uploaded photos and automatically suggest that they “tag” them. Facebook was sued in Illinois over the tag suggestion feature.
The decision “is a great example of trying to make product decisions that are good for the user and the business,” said Kristen Martin, professor of technology at the University of Notre Dame. She added that the move also demonstrates the strength of regulatory pressure, as the face recognition system has been the subject of harsh criticism for over a decade.
Researchers and privacy activists have spent years questioning the technology, citing studies that found it worked unevenly across boundaries of race, gender or age.
Concerns have also grown due to growing awareness of the Chinese government’s extensive video surveillance system, especially as it has been employed in a region home to one of China’s predominantly Muslim ethnic minority populations.
Some U.S. cities have moved to ban the use of face recognition software by police and other municipal departments. In 2019, San Francisco became the first U.S. city to ban the technology, which has long alarmed privacy and civil liberties advocates.
At least seven states and nearly two dozen cities have restricted government use of the technology due to fears of violations of civil rights, racial bias and invasion of privacy. Debate on additional bans, limits and reporting requirements has been underway in about 20 state capitals this legislative session, according to data collected by the Electronic Privacy Information Center in May this year.
Meta’s recent cautious approach to face recognition follows decisions by other US technology giants such as Amazon, Microsoft and IBM last year to end or pause their sales of face recognition software to the police, citing concerns about false identifications and amid a broader US reckoning over policing and racial injustice.
In October, President Joe Biden’s Office of Science and Technology Policy launched a fact-finding mission to examine face recognition and other biometric tools used to identify people or assess their emotional or mental states and character.
European regulators and legislators have also taken steps to block law enforcement from scanning facial features in public spaces as part of broader efforts to regulate the most risky uses of artificial intelligence.
Facebook’s face scanning practice also contributed to the $5 billion fine and privacy restrictions imposed by the Federal Trade Commission in 2019. Facebook’s settlement with the FTC after the agency’s years-long investigation included a promise to require “clear and conspicuous” notice before people’s photos and videos are subjected to face recognition technology.
