Ever since OpenAI released ChatGPT to the public late last year, the speed of innovation in the field of artificial intelligence, specifically generative AI, has been blistering. Tech companies are feverishly working to develop more intelligent systems, adding AI assistants to a wide range of tools to help users be more efficient and creative.
But for some, the pace of innovation is cause for alarm. That includes the Future of Life Institute, a nonprofit research group dedicated to the safe development of emerging technologies, which is calling on tech firms to pump the brakes on generative AI investments.
The group, backed by tech leaders such as Elon Musk, professors from MIT and Harvard, and other technology executives and researchers, says in an open letter that AI labs should immediately pause the training of AI systems more powerful than OpenAI’s GPT-4 for at least six months.
The letter is signed by more than 1,300 experts, including Musk, Apple co-founder Steve Wozniak, former presidential candidate Andrew Yang, and data and AI experts from Google and Microsoft, two of the largest companies developing AI systems.
Both Google and Microsoft are beginning to integrate generative AI across their product portfolios, developing intelligent copilots to assist users in a wide range of tasks, such as content generation, marketing, sales, communications and cybersecurity. The companies plan to roll out AI assistants in widely used tools in their respective portfolios of business applications, and other solution providers are following suit.
On another level, interest in generative AI and ChatGPT is astronomical: ChatGPT reached 1 million users faster than the largest social media platforms and other popular consumer apps, hitting that milestone in just five days last fall.
While AI research and development has been ongoing for years, its recent proliferation is cause for concern, the letter states. AI should be planned for and managed carefully, the organization says in the letter.
“Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control,” the letter states.
Essentially, the organization expresses concern that AI systems are becoming competitive with humans, and that the benefit to humanity is dubious at best. Instead, the Future of Life contends that developing machines with the power to spread propaganda, automate a wide range of jobs, and outsmart and replace us should be scrutinized to the fullest extent, and not just by "unelected tech leaders."
While the letter was signed by AI and tech experts from leading organizations, signatories from OpenAI do not currently appear on the letter. The Future of Life uses OpenAI's own words in its letter, citing a recent OpenAI statement on the need to "[at] some point … get independent review before starting to train future systems" and limit the growth of more advanced AI models.
“We agree,” the letter states. “That point is now.”
In calling on AI labs to pause development of AI systems more advanced than GPT-4, the organization says governments should step in and institute a moratorium if the pause is not enacted quickly.
In addition, AI labs and experts should develop and implement a set of shared safety protocols for advanced AI design and development "that are rigorously audited and overseen by independent outside experts" to ensure systems adhering to them are "safe beyond a reasonable doubt."
“This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities,” the Future of Life letter states.
In many generative AI announcements, tech firms caution that the models are not human and are not perfect, and often give inaccurate information, requiring human intervention. These systems have typically been announced as intelligent assistants or copilots, and are not yet intended to replace a human.
To that end, the Future of Life says tech firms need to focus on improving the safety and reliability of current models. Rather than racing toward more powerful systems, the tech industry should focus on making currently available AI systems more "accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal," the letter states.
In addition, AI developers should be working with policymakers to create AI governance systems that include regulatory authorities, oversight and tracking of AI systems, watermarking systems, liability of AI-caused harm, funding for AI safety research, and resources for coping with economic and political disruptions that AI will cause, the letter states.