The Verge reports that Japanese telecom giant NTT East and startup Earth Eyes Corp have built an artificial intelligence security camera that could be the future of automated surveillance. The camera, called AI Guardman, supposedly has the ability to spot “suspicious behavior” and report it to a store’s human workers so that they can keep an eye on “potential shoplifters.”
Using open-source software developed by Carnegie Mellon University, AI Guardman estimates the body positions of shoppers and compares them with predetermined poses that are considered “suspicious.” If it detects a match, it notifies shopkeepers through a connected app.
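To make the idea concrete, here is a minimal sketch of that kind of pose comparison. This is not NTT East’s actual system: the keypoint format loosely follows what pose-estimation tools like CMU’s OpenPose produce, but the poses, threshold, and distance metric here are illustrative assumptions.

```python
import math

# A "pose" here is a list of (x, y) keypoints in normalized [0, 1]
# image coordinates, as a pose estimator might output.

def pose_distance(pose_a, pose_b):
    """Mean Euclidean distance between corresponding keypoints."""
    return sum(math.dist(a, b) for a, b in zip(pose_a, pose_b)) / len(pose_a)

def is_suspicious(pose, reference_poses, threshold=0.1):
    """Flag a pose if it is close to any pre-registered 'suspicious' pose."""
    return any(pose_distance(pose, ref) <= threshold for ref in reference_poses)

# Hypothetical 3-keypoint reference poses (head, hand, hip) -- made up.
reaching_into_bag = [(0.5, 0.3), (0.6, 0.5), (0.55, 0.8)]
browsing = [(0.5, 0.2), (0.5, 0.5), (0.5, 0.9)]

print(is_suspicious([(0.5, 0.31), (0.61, 0.5), (0.55, 0.79)],
                    [reaching_into_bag]))        # near the reference -> True
print(is_suspicious(browsing, [reaching_into_bag]))  # -> False
```

A real system would compare sequences of poses over time rather than single frames, which is part of why distinguishing a shoplifter from an indecisive shopper is hard.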
Several companies in Japan, China, and the United States are looking to develop new deep learning techniques that will enable cameras to analyze video footage more quickly and cheaply than ever before. Amazon and Nest are beginning to incorporate artificial intelligence into their home security systems.
NTT East says that AI Guardman will hit the shelves as early as late July, with hopes of rolling it out to 10,000 stores over the next three years. It will be sold at an up-front price of around $2,150, plus a $40 monthly subscription fee for cloud support.
Naturally, this technology will be geared toward big business. Though NTT East claims it will not “omit” small businesses, it’s hard to believe a bodega would spend upwards of $2,000 on such technology to keep people from lifting a few bags of chips.
It is unlikely that this new artificial intelligence technology will avoid some form of backlash. NTT has yet to publish statistics on its false-positive rate, and the biggest concern from skeptics is, of course, how accurately these cameras will identify shoplifters. The camera has reportedly had trouble telling the difference between a shoplifter and an indecisive shopper picking up and putting down items on the shelves. “Suspicious behavior” is both subjective and context-dependent, so the camera will surely make some mistakes.
These mistakes could lead to forms of discrimination, though NTT East insists that the technology “does not find pre-registered individuals” and thus could not be discriminatory. If the camera is inherently biased against certain groups, it could mistake an innocent customer for a potential threat and send a shopkeeper to follow them around the store, an outcome that is neither ideal nor improbable.
Despite the evident concerns, this kind of technology is just the tip of the iceberg. Automated surveillance is being developed for a variety of professional and personal uses, including technologies that can spot violent behavior in crowds and facial recognition systems for law enforcement. Accuracy and fairness aside, it looks like we will be relying on computers for judgment soon enough.