The potential dangers of letting artificial intelligence grow too powerful too quickly have long been debated. For decades, popular sci-fi media brought the risks of AI to light, but only in the past several years has the technology matured enough for that discussion to migrate into the real world. In response to this rapid evolution of AI, the European Union has published a new framework to regulate the use of AI across its 27 member states.
The regulations cover a wide range of applications, from software in self-driving cars to algorithms used to vet job candidates, and arrive at a time when countries around the world are struggling with the ethical ramifications of artificial intelligence. Similar to the EU’s data privacy law, GDPR, the regulation gives the bloc the ability to fine companies that infringe its rules up to 6 percent of their global revenues, though such punishments are extremely rare.
The regulations focus on four specific AI use cases: real-time remote biometric identification systems, social credit scoring, systems that cause physical or psychological harm to people, and systems that manipulate behavior using subliminal cues.
The applications of AI are segmented into three levels of risk: low-risk, such as spam filters; limited-risk, such as chatbots; and high-risk, covering applications that affect material aspects of people's lives, such as algorithms that assess creditworthiness and AI tools that control critical machinery. The amount of regulation will vary by level, with the most oversight applied to high-risk applications and few changes to the oversight of low-risk applications.
High-risk applications will specifically need to be registered in an EU database. For low-risk and limited-risk applications, the requirements focus more on transparency, ensuring users are aware any time they are interacting with a machine.
The full proposal, which can be found on the European Commission website, outlines more specifically how regulations could be passed to limit the risk of artificial intelligence. The focus isn't on preventing a science-fiction-style extinction in which robots turn on mankind, but on the inherent biases and potential pitfalls that artificial intelligence systems can be prone to.
When it comes down to it, AI is not nearly as far along as many outside the field believe. Almost every AI application that exists today is more accurately classified as machine learning. These systems aren't thinking the way a human does; they are following statistical patterns learned from data that lead to likely outcomes. This lack of sophistication is important to keep in mind before we hand critical business functions over to these systems.
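To make that distinction concrete, the sketch below trains a toy spam filter, one of the low-risk applications named in the proposal, using the widely available scikit-learn library. The example messages and labels are invented purely for illustration; a real filter would train on far larger datasets, but the principle is the same: the model learns word-count statistics from labeled examples rather than understanding the messages it reads.

```python
# Minimal sketch of a machine-learning "AI": a statistical model that maps
# word patterns to likely labels, with no comprehension of the text itself.
# The training messages and labels below are invented for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "Win a free prize now",                  # spam
    "Limited offer, claim your reward",      # spam
    "Meeting moved to 3pm tomorrow",         # not spam
    "Can you review the quarterly report?",  # not spam
]
labels = ["spam", "spam", "ham", "ham"]

# Count word occurrences, then fit a naive Bayes classifier on those counts.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

# The model simply follows the statistical path most consistent with its
# training data; it has no notion of what a "prize" or a "meeting" is.
print(model.predict(["Claim your free reward today"]))  # likely ['spam']
print(model.predict(["Report attached for review"]))    # likely ['ham']
```

The point of the sketch is that the "intelligence" here is pattern matching over a handful of examples, which is also why such systems inherit whatever biases exist in their training data.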
In any case, IT professionals will want to keep an eye on this proposal and the regulations it may lead to. If you develop an AI system intended for use globally, then at least in the EU you'll need it to adhere to these standards.