Artificial intelligence research organization OpenAI is releasing an application programming interface (API) for accessing its new AI models, giving businesses access to the company’s first commercial product and its general-purpose models.
Most AI systems are designed for one use case, but OpenAI’s release is intended to provide a general-purpose “text in, text out” interface that allows users to try it on virtually any English-language task, the company said in an announcement.
Organizations can now request access to integrate the API into their products, develop a new application, or help OpenAI explore the technology, the organization said.
OpenAI explains how it works in the announcement post:
Given any text prompt, the API will return a text completion, attempting to match the pattern you gave it. You can “program” it by showing it just a few examples of what you’d like it to do; its success generally varies depending on how complex the task is. The API also allows you to hone performance on specific tasks by training on a dataset (small or large) of examples you provide, or by learning from human feedback provided by users or labelers.
We’ve designed the API to be both simple for anyone to use but also flexible enough to make machine learning teams more productive. In fact, many of our teams are now using the API so that they can focus on machine learning research rather than distributed systems problems. Today the API runs models with weights from the GPT-3 family with many speed and throughput improvements. Machine learning is moving very fast, and we’re constantly upgrading our technology so that our users stay up to date.
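The few-shot “programming” described in the quote above can be sketched as a plain request payload. This is a minimal illustration only: the endpoint URL, parameter names, and API-key placeholder below are assumptions for the sketch, not details taken from the announcement.

```python
import json

# Assumed endpoint path and auth scheme -- illustrative, not from the announcement.
API_URL = "https://api.openai.com/v1/engines/davinci/completions"
headers = {
    "Authorization": "Bearer YOUR_API_KEY",  # placeholder key
    "Content-Type": "application/json",
}

# "Program" the model by showing it a few examples of the task,
# then leave the final pattern incomplete so the model fills it in.
prompt = (
    "English: cheese\nFrench: fromage\n"
    "English: apple\nFrench: pomme\n"
    "English: house\nFrench:"
)

# Parameter names here (prompt, max_tokens, temperature) are assumed.
payload = {"prompt": prompt, "max_tokens": 5, "temperature": 0.0}
body = json.dumps(payload)
print(body)
```

Sending `body` in an HTTP POST to the endpoint would return a text completion attempting to continue the pattern, which is the whole interface: no task-specific model, just examples in the prompt.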
The API is launching in a private beta rather than general availability, largely to ensure the technology isn’t put to harmful use.
OpenAI says that because AI technology is progressing quickly, both positive and negative applications are being developed apace. The organization said it will terminate API access for obviously harmful use cases such as harassment, spam, radicalization, or astroturfing.
But we also know we can’t anticipate all of the possible consequences of this technology, so we are launching today in a private beta rather than general availability, building tools to help users better control the content our API returns, and researching safety-relevant aspects of language technology (such as analyzing, mitigating, and intervening on harmful bias). We’ll share what we learn so that our users and the broader community can build more human-positive AI systems.