Next month, Amazon shareholders will vote on a proposal to ban the sale of facial recognition technology to governments. The vote comes after the proposal survived Amazon’s attempt to quash it before it reached the voting stage, Gizmodo reports.
Earlier this month, industry and academic AI researchers sent a letter to Amazon urging the company to stop selling Rekognition, its facial recognition product, to law enforcement. The letter claims that Rekognition’s error rates are high, especially when identifying dark-skinned people and women. Additionally, an ACLU study found that the technology “falsely matched 28 members of Congress to mugshot photos,” with a disproportionate share of those false matches affecting people of color, Gizmodo says.
“Shareholders request that the Board of Directors prohibit sales of facial recognition technology to government agencies unless the Board concludes, after an evaluation using independent evidence, that the technology does not cause or contribute to actual or potential violations of civil and human rights,” the shareholder proposal reads.
According to Gizmodo, Amazon said that the researchers weren’t using Rekognition properly. Amazon’s board of directors opposes the proposal, and “says that in the two years since Rekognition went to market it’s never received a complaint about the product violating rights or being sold to repressive governments.”
Taking it a step further:
Based on how Rekognition is being used, there appears to be a role-reversal conflict between companies and the government over who governs facial recognition technology. On multiple occasions, Amazon has claimed that users of Rekognition, including law enforcement, aren’t utilizing the technology “as instructed.” This raises the question of why a large technology company, rather than established laws and privacy regulations, is calling the shots on how such a solution may be used.
Gizmodo says other tech companies are aware of this role-reversal and are questioning it; for example, Brad Smith, Microsoft’s president and chief legal officer, has called for more federal regulation of facial recognition software to protect human rights. “We live in a nation of laws, and the government needs to play an important role in regulating facial recognition technology,” Smith wrote in 2018. “As a general principle, it seems more sensible to ask an elected government to regulate companies than to ask unelected companies to regulate such a government.”
As a result, decision makers, especially those at large tech companies working with facial recognition solutions, should keep an eye on the Rekognition case and watch how the proposal plays out. If it passes, it could set a precedent for who facial recognition technology can be sold to, how it’s used, and who can use it.