
In line with our iterative deployment philosophy, we are gradually rolling out plugins in ChatGPT so we can study their real-world use, impact, and safety and alignment challenges, all of which we’ll have to get right in order to achieve our mission.

Users have been asking for plugins since we launched ChatGPT (and many developers are experimenting with similar ideas) because they unlock a vast range of possible use cases. We’re starting with a small set of users and are planning to gradually roll out larger-scale access as we learn more (for plugin developers, ChatGPT users, and, after an alpha period, API users who would like to integrate plugins into their products). We’re excited to build a community shaping the future of the human–AI interaction paradigm.

Plugin developers who have been invited off our waitlist can use our documentation to build a plugin for ChatGPT; the enabled plugins are then listed in the prompt shown to the language model, along with documentation instructing the model how to use each. The first plugins have been created by Expedia, FiscalNote, Instacart, KAYAK, Klarna, Milo, OpenTable, Shopify, Slack, Speak, Wolfram, and Zapier.

Language models today, while useful for a variety of tasks, are still limited. The only information they can learn from is their training data. This information can be out-of-date and is one-size-fits-all across applications. Furthermore, the only thing language models can do out of the box is emit text. This text can contain useful instructions, but to actually follow those instructions you need another process.

Though not a perfect analogy, plugins can be “eyes and ears” for language models, giving them access to information that is too recent, too personal, or too specific to be included in the training data. In response to a user’s explicit request, plugins can also enable language models to perform safe, constrained actions on their behalf, increasing the usefulness of the system overall.

We expect that open standards will emerge to unify the ways in which applications expose an AI-facing interface. We are working on an early attempt at what such a standard might look like, and we’re looking for feedback from developers interested in building with us.

Today, we’re beginning to gradually enable existing plugins from our early collaborators for ChatGPT users, beginning with ChatGPT Plus subscribers. We’re also beginning to roll out the ability for developers to create their own plugins for ChatGPT. In the coming months, as we learn from deployment and continue to improve our safety systems, we’ll iterate on this protocol, and we plan to enable developers using OpenAI models to integrate plugins into their own applications beyond ChatGPT.

Connecting language models to external tools introduces new opportunities as well as significant new risks.

Plugins offer the potential to tackle various challenges associated with large language models, including “hallucinations,” keeping up with recent events, and accessing (with permission) proprietary information sources. By integrating explicit access to external data, such as up-to-date information online, code-based calculations, or custom plugin-retrieved information, language models can strengthen their responses with evidence-based references. These references not only enhance the model’s utility but also enable users to assess the trustworthiness of the model’s output and double-check its accuracy, potentially mitigating risks related to overreliance, as discussed in our recent GPT-4 system card. Lastly, the value of plugins may go well beyond addressing existing limitations by helping users with a variety of new use cases, ranging from browsing product catalogs to booking flights or ordering food.

At the same time, there’s a risk that plugins could increase safety challenges by taking harmful or unintended actions, increasing the capabilities of bad actors who would defraud, mislead, or abuse others. By increasing the range of possible applications, plugins may raise the risk of negative consequences from mistaken or misaligned actions taken by the model in new domains. From day one, these factors have guided the development of our plugin platform, and we have implemented several safeguards.

We’ve performed red-teaming exercises, both internally and with external collaborators, that have revealed a number of possible concerning scenarios. For example, our red teamers discovered ways for plugins, if released without safeguards, to perform sophisticated prompt injection, send fraudulent and spam emails, bypass safety restrictions, or misuse information sent to the plugin. We’re using these findings to inform safety-by-design mitigations that restrict risky plugin behaviors and improve transparency of how and when they’re operating as part of the user experience. We’re also using these findings to inform our decision to gradually deploy access to plugins.
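The post mentions that enabled plugins are listed in the prompt shown to the language model, along with documentation on how to use each. A minimal sketch of what that assembly step might look like is below; the manifest field names (`name_for_model`, `description_for_model`) and the prompt wording are illustrative assumptions, not the actual ChatGPT implementation.

```python
# Hypothetical sketch: each enabled plugin contributes a short model-facing
# description, and the host application concatenates them into one prompt
# section. Field names below are assumptions for illustration only.

def build_plugin_prompt(manifests):
    """Render a prompt section listing each enabled plugin and its docs."""
    lines = ["You have access to the following tools:"]
    for m in manifests:
        lines.append(f"- {m['name_for_model']}: {m['description_for_model']}")
    return "\n".join(lines)

# Example manifest for a hypothetical TODO-list plugin.
manifests = [
    {
        "name_for_model": "todo",
        "description_for_model": "Manage the user's TODO list: add or list items.",
    },
]

print(build_plugin_prompt(manifests))
```

Keeping the model-facing description short matters in practice, since every enabled plugin's documentation consumes space in the model's context window.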

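The text above notes that a language model can only emit text, and that actually following the model's instructions requires another process. One common way to build that process, sketched here under the assumption that the model is prompted to emit a small JSON object naming a tool and its arguments (the tool name and function are hypothetical):

```python
import json

# Minimal sketch of the "another process" that turns model-emitted text into
# an action: the host parses the model's structured output, runs the matching
# function, and would feed the result back into the conversation.

def get_current_price(symbol: str) -> str:
    # Stand-in for a real plugin call (e.g., an HTTP request to the plugin's API).
    return f"{symbol}: 42.00 USD"

# Registry of callable tools the host is willing to execute.
TOOLS = {"get_current_price": get_current_price}

def dispatch(model_output: str) -> str:
    call = json.loads(model_output)   # e.g. {"tool": "...", "args": {...}}
    fn = TOOLS[call["tool"]]          # unknown tool names raise KeyError
    return fn(**call["args"])

# The model's output is just text; the host process makes it actionable.
result = dispatch('{"tool": "get_current_price", "args": {"symbol": "ACME"}}')
print(result)  # ACME: 42.00 USD
```

Restricting execution to an explicit registry like `TOOLS`, rather than evaluating arbitrary model output, is one of the "safe, constrained actions" ideas the post alludes to.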