In response to the rapid pace of innovation in generative AI, the Biden-Harris Administration is taking action to promote responsible AI development designed to protect privacy, security and the economy, including a $140 million investment in responsible AI research, assessments of existing AI systems and new regulations.
The White House’s announcement comes as Vice President Kamala Harris and other senior officials meet with leaders from four companies developing generative AI systems: Alphabet (Google), Anthropic, Microsoft and OpenAI, the creators of ChatGPT.
The action comes as Microsoft, Google, Salesforce and dozens of other leading IT providers are integrating generative AI tools into their solutions, leading to a blistering pace of innovation in the space that has some concerned about privacy, security and economic impacts.
To help encourage the responsible development of AI, the National Science Foundation is providing $140 million in funding to launch seven new National AI Research Institutes, bringing the total number of Institutes across the country to 25.
According to the White House’s fact sheet, the Institutes pursue and encourage transformative AI advances that are ethical, trustworthy, responsible, and serve the public good while bolstering the country’s AI research and development infrastructure. Specifically, the new Institutes will help guide AI development in climate, agriculture, energy, public health, education and cybersecurity.
The Administration also wants more eyes on the existing AI offerings from Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI, and Stability AI, and has secured commitments from those companies to participate in a public, independent evaluation of their AI systems.
AI Development Regulation Efforts
According to the White House, these efforts build on steps the Administration has taken to promote responsible AI development, including the Blueprint for an AI Bill of Rights and related executive actions announced last fall, as well as the AI Risk Management Framework and a roadmap for standing up a National AI Research Resource released earlier this year.
By opening up those AI systems to further scrutiny, the Administration hopes the models can be evaluated by a community of AI experts to explore how the models align with the Administration’s Blueprint for an AI Bill of Rights and AI Risk Management Framework.
“This independent exercise will provide critical information to researchers and the public about the impacts of these models and will enable AI companies and developers to take steps to fix issues found in those models,” the Administration’s fact sheet says. “Testing of AI models independent of government or the companies that have developed them is an important component in their effective evaluation.”
The Office of Management and Budget will also release draft policy guidance this summer on the government’s use of AI systems and will solicit public comment on those proposed rules.
According to the White House, other steps taken include an Executive Order that directs federal agencies to root out bias in their design and use of AI technologies, other actions to protect citizens from AI-related harms, and steps taken to address national security concerns raised by AI in cybersecurity, biosecurity and safety.