A small AI company has developed a new approach to facial recognition that could send privacy plummeting, the New York Times reports.
The company, Clearview AI, created an app based on images scraped from Facebook, YouTube, and other websites: when a photo of someone is taken and uploaded, the app lets the user see public photos of that same person. Within the app, all of the scraped photos are converted into vectors and clustered into neighborhoods of similar faces; when a user uploads a photo of a face, the app converts it into a vector and “shows all the scraped photos stored in the vector’s neighborhood, alongside the links that go to the sites which provided the gallery of photos.”
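That description maps onto a standard embedding-plus-nearest-neighbor design. The sketch below illustrates the general idea only; the `embed_face` stub, the example file names and URLs, and the scikit-learn index are illustrative assumptions, not details of Clearview’s actual system.

```python
# Illustrative sketch: scraped photos become embedding vectors, the vectors are
# indexed so similar faces fall into the same "neighborhood", and a query face
# is matched against that index. Placeholder data only, not Clearview's system.

import numpy as np
from sklearn.neighbors import NearestNeighbors

def embed_face(image_path: str) -> np.ndarray:
    """Placeholder: a real system would run a face-embedding model
    (e.g. a CNN producing a 128- or 512-dimensional vector)."""
    rng = np.random.default_rng(abs(hash(image_path)) % (2**32))
    return rng.normal(size=128)

# 1. Build the gallery: one vector per scraped photo, plus its source link.
scraped_photos = [
    ("photo_001.jpg", "https://example.com/profile/alice"),
    ("photo_002.jpg", "https://example.com/profile/bob"),
    ("photo_003.jpg", "https://example.com/profile/carol"),
]
gallery_vectors = np.stack([embed_face(path) for path, _ in scraped_photos])

# 2. Index the vectors so a lookup returns the nearest "neighborhood".
index = NearestNeighbors(n_neighbors=2, metric="cosine").fit(gallery_vectors)

# 3. Query: convert the uploaded photo to a vector and return the closest
#    scraped photos along with the sites they came from.
query_vector = embed_face("uploaded_face.jpg").reshape(1, -1)
distances, neighbor_ids = index.kneighbors(query_vector)

for dist, idx in zip(distances[0], neighbor_ids[0]):
    path, url = scraped_photos[idx]
    print(f"match: {path} (distance {dist:.3f}) -> {url}")
```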
The app is currently big with police: it has been licensed to multiple security agencies and used by law enforcement to track down criminals. It helps officers solve crimes quickly; the Indiana State Police closed a case in 20 minutes by using the app, the New York Times reports. Police officers and investors in Clearview “predict that its app will eventually be made available to the public.”
Infringing on the “P-Word”
While Clearview AI’s app is fast and fairly effective at catching bad guys (it identifies people correctly 75 percent of the time), the problem of privacy comes to the forefront. The New York Times says the app is drawing attention because Clearview AI appears to be violating other websites’ terms of service, which prohibit scraping photos. The company keeps all of those photos in its database, where they can be accessed indefinitely (although the company says it is working on a tool that would let people request that their photos be removed).
As a result, the app fuels fears that facial recognition technology will help Big Brother keep watch on everyone. And since the app is only accurate three-quarters of the time, the New York Times notes that there hasn’t been enough research into how often it is wrong, and what the consequences of a misidentification would be. This reinforces concerns that facial recognition can be biased, and it shows how some agencies are using personal data without users’ consent.
With the app adding another layer of complexity to the world of facial recognition, industry experts are warning the public: while it’s an interesting and useful tool, users may exploit it for personal gain rather than the greater good. “I don’t see a future where we harness the benefits of face recognition technology without the crippling abuse of the surveillance that comes with it,” Woodrow Hartzog, a professor of law and computer science at Northeastern University, told the New York Times.