Artificial Intelligence (AI) is rapidly transforming the world, from the way we live to the way we work, bringing unprecedented opportunities and challenges.
However, with these rapid advancements come concerns about the ethical, legal, and societal implications of AI, particularly around data and privacy protection. As such, policymakers are grappling with how to regulate AI in a way that balances consumer protection with fostering innovation.
The Positives of AI Regulation
The need for AI regulation is pressing for several reasons, although any regulatory framework will need to be flexible enough to accommodate the rapidly evolving nature of AI.
While AI creates efficiencies, it cannot understand the full story behind the data or why certain patterns appear, which can result in bias. There are many types of AI bias, including selection, confirmation and algorithmic bias. One example is gender bias in job advertisements rendered through AI-powered recruiting tools, which have been found to exclude women from certain male-dominated fields and men from women-dominated fields, as well as to exhibit ethnic bias by targeting certain roles to certain populations. Businesses need to be aware of these biases and work to eliminate them to ensure that AI is used ethically and fairly. AI can only be as good as the completeness and accuracy of its data sets and the algorithm used to derive insights or predictions.
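One simple way a business might audit for the kind of selection bias described above is to compare outcome rates across demographic groups. The sketch below is a minimal, hypothetical illustration: the field names (`gender`, `selected`) and the record format are assumptions for the example, not part of any standard tooling.

```python
from collections import defaultdict

def selection_rates(records, group_key, selected_key):
    """Compute the selection rate (share of positive outcomes) per group.

    records: list of dicts, e.g. {"gender": "F", "selected": True}.
    The field names are illustrative assumptions, not a standard schema.
    """
    totals = defaultdict(int)
    selected = defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        if r[selected_key]:
            selected[r[group_key]] += 1
    return {g: selected[g] / totals[g] for g in totals}

def demographic_parity_gap(rates):
    """Largest difference in selection rate between any two groups."""
    values = list(rates.values())
    return max(values) - min(values)

# Example: a recruiting tool's screening outcomes, broken out by gender
records = [
    {"gender": "F", "selected": True},
    {"gender": "F", "selected": False},
    {"gender": "F", "selected": False},
    {"gender": "M", "selected": True},
    {"gender": "M", "selected": True},
    {"gender": "M", "selected": False},
]
rates = selection_rates(records, "gender", "selected")
gap = demographic_parity_gap(rates)
# A large gap would prompt investigation of the model and its training data.
```

A check like this does not prove or disprove bias on its own, but it gives a business a concrete, repeatable signal to monitor rather than relying on anecdote.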
The right regulation also protects consumers' data and privacy. This means governance around the collection, storage and use of data by AI systems. For example, we have seen this with marketing, where consumers must agree to how cookies from the sites they visit are used. For AI, this may involve requiring companies to disclose how a system works, what data and inputs it leverages, and how it uses them to make decisions.
Looking at the life sciences industry, companies are building solutions that address personal health, so the data they generate must be auditable by the FDA. For instance, AI-powered drug discovery tools may be biased toward certain classes of drugs and diseases based on historical data, limiting the discovery of new therapeutics or treatments for less common diseases. The SEC performs similar oversight within the financial sector, where AI-powered credit scoring or lending models may be biased against individuals from certain demographic backgrounds, leading to unfair lending decisions.
Given these concerns, policymakers are under increasing pressure to regulate AI. However, as we have seen before, overregulation can also cripple innovation.
The Negative Side of AI Overregulation
It’s human nature to be afraid of what could be.
Therefore, the cycle tends to begin with overregulation and then finds a balance as people become more comfortable with the technology's capabilities. We saw some of this with cloud technologies early on. When cloud technologies first emerged, people were concerned about trusting cloud architecture, who could see their information and how secure their data was. While those fears proved largely unfounded, they led many organizations to move slowly, and only now are we starting to see widespread adoption of cloud technologies.
AI systems rely on large amounts of data to learn and improve. With overly restrictive regulations, companies may struggle to collect the data they need to develop new applications and insights. This slows the development of new technologies and entices companies to locate in countries with less strict regulations for a competitive advantage.
One approach that can be used to strike the right balance between data protection and innovation is to implement a data minimization strategy. This practice involves collecting and processing only the data that is necessary for the specific task. By minimizing the amount of data collected, the risk of data breaches and misuse is greatly reduced, while individuals' privacy is also protected.
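In practice, data minimization can be enforced at ingestion time by dropping every field a task does not need. The sketch below is a minimal illustration under assumed requirements: the field names and the `REQUIRED_FIELDS` set are hypothetical examples, not a prescribed schema.

```python
# Hypothetical data minimization filter: retain only the fields required
# for one specific task and discard everything else before storage.

REQUIRED_FIELDS = {"age_band", "region"}  # assumed needs of one analytics task

def minimize(record, required=REQUIRED_FIELDS):
    """Return a copy of the record containing only the required fields."""
    return {k: v for k, v in record.items() if k in required}

raw = {
    "name": "Jane Doe",           # identifying, not needed -> dropped
    "email": "jane@example.com",  # identifying, not needed -> dropped
    "age_band": "30-39",
    "region": "EU",
}
minimized = minimize(raw)  # only age_band and region survive ingestion
```

Because identifying fields are never stored, a later breach or misuse of the stored data exposes far less about any individual.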
Striking the Right Balance
AI regulations are necessary to address the risks and challenges associated with AI. However, it is also important to strike the right balance and not stifle innovation. This will require more collaboration between policymakers, industry and civil society.
A key difference to keep in mind now is the speed at which adoption occurs. As we lean further into being a digital society, adoption of new technologies typically happens at a much faster rate. While this may cause panic and make regulators feel rushed to issue guidelines, poorly designed regulations could stifle innovation and hinder the development of new AI applications.
Douglas Vargo is VP of emerging technologies practice lead at IT & business consulting services firm CGI.