With artificial intelligence expected to dominate the IT landscape in 2023, the U.S. Department of Commerce’s National Institute of Standards and Technology has released a new document designed to help manage the risks of designing, developing, deploying or using AI systems.
The Artificial Intelligence Risk Management Framework (AI RMF) follows a direction from Congress, and was developed with input from both public and private sectors. It is designed to help organizations adapt to the AI landscape as technologies continue to be developed by IT providers and deployed by organizations. However, it also aims to help protect organizations from potential harms posed by AI.
According to NIST, AI poses several risks. AI models are trained on data that can change over time, sometimes significantly and unexpectedly, which can affect systems in ways that are difficult to understand.
In addition, NIST says AI systems are “socio-technical,” which means that they are influenced by societal dynamics and human behavior.
Risks can emerge from the complex interplay of those technical and societal factors, which can affect lives in situations ranging from experiences with online chatbots to the results of job and loan applications, NIST says.
“This voluntary framework will help develop and deploy AI technologies in ways that enable the United States, other nations and organizations to enhance AI trustworthiness while managing risks based on our democratic values,” says Deputy Commerce Secretary Don Graves. “It should accelerate AI innovation and growth while advancing — rather than restricting or damaging — civil rights, civil liberties and equity for all.”
The agency made no mention of trending conversational AI technologies such as ChatGPT, but the timing of the release (January 26) is convenient.
According to NIST, the framework is designed to help organizations think about AI and risk differently by promoting a change in institutional culture and encouraging organizations to approach AI with a new perspective.
The framework is divided into two parts: framing the risks related to AI and outlining the characteristics of trustworthy AI systems, and the core framework that describes four specific functions to help organizations address the risks of AI systems in practice. Those four functions are govern, map, measure and manage.
NIST says it worked with private and public sector partners for 18 months to develop the AI RMF, and the document reflects about 400 sets of formal comments from more than 240 organizations.
The agency also today released a companion voluntary AI RMF Playbook, which suggests ways to navigate and use the framework. In addition, NIST says it plans to launch a Trustworthy and Responsible AI Resource Center to help organizations put the AI RMF 1.0 into practice. The agency encourages organizations to develop and share profiles of how they would put it to use in their specific contexts.
The framework is part of the agency’s larger effort to cultivate trust in AI technologies, which will be necessary if the technology is to be accepted widely by society, according to Under Secretary for Standards and Technology and NIST Director Laurie E. Locascio.
“The AI Risk Management Framework can help companies and other organizations in any sector and any size to jump-start or enhance their AI risk management approaches,” Locascio says. “It offers a new way to integrate responsible practices and actionable guidance to operationalize trustworthy and responsible AI. We expect the AI RMF to help drive development of best practices and standards.”