OpenAI announced multiple new features for ChatGPT and other artificial intelligence tools at its recent developer conference. The upcoming launch of a creator tool for custom chatbots, called GPTs (short for generative pretrained transformers), and a new model for ChatGPT, called GPT-4 Turbo, were two of the most important announcements from the company’s event.
This isn’t the first time OpenAI has given ChatGPT a new model. Earlier this year, OpenAI updated the model underlying ChatGPT from GPT-3.5 to GPT-4. Curious how the GPT-4 Turbo version of the chatbot will be different when it rolls out later this year? Based on previous releases, the model will likely roll out to ChatGPT Plus subscribers first and to the general public later.
While OpenAI turned down WIRED’s request for early access to the new ChatGPT model, here’s what we expect to be different about GPT-4 Turbo.
New Knowledge Cutoff
Say goodbye to ChatGPT’s perpetual reminder that its information cutoff is September 2021. “We are just as annoyed as all of you, probably more, that GPT-4’s knowledge about the world ended in 2021,” said Sam Altman, CEO of OpenAI, at the conference. The new model includes information through April 2023, so it can answer your prompts with more current context. Altman expressed his intention to never let ChatGPT’s info get that dusty again. How this information is obtained remains a major point of contention for authors and publishers who are unhappy that OpenAI uses their writing without consent.
Input Longer Prompts
Don’t be afraid to get super long and detailed with your prompts! “GPT-4 Turbo supports up to 128,000 tokens of context,” said Altman. Tokens aren’t synonymous with words, but Altman compared the new limit to roughly the number of words in 300 book pages. Let’s say you want the chatbot to analyze an extensive document and provide a summary. With GPT-4 Turbo, you can now input far more information at once.
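To see how Altman’s 300-page comparison shakes out, here’s a back-of-envelope sketch in Python. The words-per-token ratio and words-per-page figure are rough rules of thumb for English text, not official OpenAI numbers:

```python
# Rough estimate of what a 128,000-token context window holds.
# WORDS_PER_TOKEN (~0.75) and WORDS_PER_PAGE (~320) are common
# heuristics, not official figures.
CONTEXT_TOKENS = 128_000
WORDS_PER_TOKEN = 0.75   # rough heuristic for English prose
WORDS_PER_PAGE = 320     # typical paperback page

approx_words = CONTEXT_TOKENS * WORDS_PER_TOKEN   # ~96,000 words
approx_pages = approx_words / WORDS_PER_PAGE      # ~300 pages
print(f"~{approx_words:,.0f} words, ~{approx_pages:.0f} book pages")
```

With those assumed ratios, the math lands almost exactly on the 300 pages Altman cited.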
Better Instruction Following
Wouldn’t it be nice if ChatGPT were better at paying attention to the fine details of what you’re requesting in a prompt? According to OpenAI, the new model will be a better listener. “GPT-4 Turbo performs better than our previous models on tasks that require the careful following of instructions, such as generating specific formats (e.g., ‘always respond in XML’),” reads the company’s blog post. This may be particularly useful for people who write code with the chatbot’s assistance.
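In practice, developers usually pin a format like “always respond in XML” in the system message of a chat-completion request. Here’s a minimal sketch of what that payload looks like; the model name `gpt-4-1106-preview` was the GPT-4 Turbo preview identifier announced at the conference, but check OpenAI’s documentation for the current name before relying on it:

```python
# Sketch: steering the output format via a system message.
# This builds the request payload only; sending it requires the
# OpenAI client library and an API key.

def build_request(user_prompt: str) -> dict:
    """Assemble a chat-completion payload that pins the reply format."""
    return {
        "model": "gpt-4-1106-preview",  # GPT-4 Turbo preview name at launch
        "messages": [
            # The format instruction lives in the system message so it
            # applies to every turn of the conversation.
            {"role": "system",
             "content": "Always respond in XML with a single <answer> root element."},
            {"role": "user", "content": user_prompt},
        ],
    }

request = build_request("Summarize today's announcements.")
print(request["messages"][0]["content"])
```

OpenAI’s claim is that GPT-4 Turbo sticks to an instruction like this more reliably than earlier models did.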
Cheaper Prices for Developers
It might not be front-of-mind for most users of ChatGPT, but it can be quite pricey for developers to use OpenAI’s application programming interface. “So, the new pricing is one cent for a thousand prompt tokens and three cents for a thousand completion tokens,” said Altman. In plain language, this means that GPT-4 Turbo may cost less for devs to input information and receive answers.
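Using the prices Altman quoted ($0.01 per 1,000 prompt tokens, $0.03 per 1,000 completion tokens), a quick calculation shows what a single call could cost. The example call sizes below are illustrative, not figures from OpenAI:

```python
# Estimate GPT-4 Turbo API cost from Altman's quoted prices:
# $0.01 per 1,000 prompt tokens, $0.03 per 1,000 completion tokens.

def turbo_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Return the estimated cost in US dollars for one API call."""
    return prompt_tokens / 1000 * 0.01 + completion_tokens / 1000 * 0.03

# A prompt that fills the whole 128k context, with a 1,000-token answer:
print(f"${turbo_cost(128_000, 1_000):.2f}")  # $1.31
```

So even maxing out the new context window costs on the order of a dollar per request at these rates, which is what makes the price cut meaningful for developers making many calls.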
Multiple Tools in One Chat
Subscribers to ChatGPT Plus may be familiar with the GPT-4 dropdown menu where you can select which chatbot tools you’d like to use. For example, you could pick the Dall-E 3 beta if you want some AI-generated images or the Browse with Bing version if you need links from the internet. That dropdown menu is soon headed to the software graveyard. “We heard your feedback. That model picker was extremely annoying,” said Altman. The updated chatbot with GPT-4 Turbo will pick the right tools, so if you request an image, for example, it’s expected to automatically use Dall-E 3 to answer your prompt.