5 ways GPT-4 outsmarts ChatGPT

OpenAI’s new GPT-4 AI model has made its big debut, and is already powering everything from a virtual volunteer for the visually impaired to an improved language learning bot in Duolingo. But what sets GPT-4 apart from previous versions like ChatGPT and GPT-3.5? Here are the 5 biggest differences between these popular systems.

First, though, what’s in a name? Although ChatGPT was originally described as being GPT-3.5 (and therefore a few iterations beyond GPT-3), it is not itself a version of OpenAI’s large language model, but rather a chat-based interface for whatever model powers it. The ChatGPT system that exploded in popularity over the last few months was a way to interact with GPT-3.5, and now it’s a way to interact with GPT-4.

With that said, let’s get into the differences between the chatbot you know and love and its newly augmented successor.

1. GPT-4 can see and understand images

The most noticeable change to this versatile machine learning system is that it is “multimodal,” meaning it can understand more than one “modality” of information. ChatGPT and GPT-3 were limited to text: they could read and write but that was about it (though more than enough for many applications).

GPT-4, however, can be given images and it will process them to find relevant information. You could simply ask it to describe what’s in a picture, of course, but importantly its understanding goes beyond that. The example provided by OpenAI actually has it explaining the joke in an image of a hilariously oversized iPhone connector, but the partnership with Be My Eyes, an app that blind and low-vision folks use to have volunteers describe what their phone sees, is more revealing.

Image Credits: Be My Eyes

In the video for Be My Eyes, GPT-4 describes the pattern on a dress, identifies a plant, explains how to get to a certain machine at the gym, translates a label (and offers a recipe), reads a map, and performs a number of other tasks that show it really gets what is in an image — if it’s asked the right questions. It knows what the dress looks like, but it might not know if it’s the right outfit for your interview.
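
For developers, multimodal prompting just means attaching an image to an otherwise ordinary request. Here’s a minimal sketch of what that could look like with OpenAI’s Python SDK; the model name, the image URL, and the availability of image input through the API are illustrative assumptions, not anything announced in this article.

```python
# Hypothetical sketch: sending an image plus a question to a multimodal GPT-4 model.
# The model name ("gpt-4o") and the example image URL are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What pattern is on this dress, and what color is it?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/dress.jpg"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)
```

The question rides along with the image, which is what lets you ask about the joke in a picture rather than settling for a caption.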

2. GPT-4 is harder to trick

For all that today’s chatbots get right, they tend to be easily led astray. A little coaxing can persuade them that they are simply explaining what a “bad AI” would do, or some other little fiction that lets the model say all kinds of weird and frankly unnerving things. People even collaborate on “jailbreak” prompts that quickly let ChatGPT and others out of their pens.

GPT-4, on the other hand, has been trained on lots and lots of malicious prompts — which users helpfully gave OpenAI over the last year or two. With these in mind, the new model is much better than its predecessors on “factuality, steerability, and refusing to go outside of guardrails.”

The way OpenAI describes it, GPT-3.5 (which powered ChatGPT) was a “test run” of a new training architecture, and they applied the lessons from that to the new version, which was “unprecedentedly stable.” They also were better able to predict its capabilities, which makes for fewer surprises.

3. GPT-4 has a longer memory

These large language models are trained on millions of web pages, books, and other text data, but when they’re actually having a conversation with a user, there’s a limit to how much they can keep “in mind,” as it were (one sympathizes). That limit for GPT-3.5 and the old version of ChatGPT was 4,096 “tokens,” which works out to around 3,000 words, or roughly six pages of a book. So the model would lose track of things once they passed that far “back” in its context window.

GPT-4 has a maximum token count of 32,768 (that’s 2^15, if you’re wondering why the number looks familiar). That translates to around 25,000 words, or roughly 50 pages of text, enough for an entire play or short story.

What this means is that, whether it’s holding a conversation or generating text, GPT-4 can keep around 50 pages in mind. So it will remember what you talked about 20 pages back in a chat, or, when writing a story or essay, it can refer to events that happened 35 pages earlier. That’s a very approximate description of how the attention mechanism and token count work, but the general idea is expanded memory and the capabilities that come with it.
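
If you want to check those numbers against your own text, OpenAI’s tiktoken library counts tokens the same way the models do. A quick sketch follows; the file name is hypothetical and the 0.75 words-per-token figure is just a rule of thumb for English.

```python
# Rough sketch of context-window budgeting with the tiktoken library.
# "draft.txt" is a hypothetical input file; 0.75 words per token is an approximation.
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")  # GPT-4 and GPT-3.5 use the same cl100k_base encoding

with open("draft.txt") as f:
    text = f.read()

tokens = len(enc.encode(text))
approx_words = int(tokens * 0.75)  # English averages roughly 0.75 words per token

print(f"{tokens} tokens (~{approx_words} words)")
print("Fits GPT-3.5's 4,096-token window:", tokens <= 4_096)
print("Fits GPT-4's 32,768-token window:", tokens <= 32_768)
```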

4. GPT-4 is more multilingual

The AI world is dominated by English speakers, and everything from training data to testing to research papers is in that language. But of course the capabilities of large language models apply to any written language, and they ought to be made available in those languages too.

GPT-4 takes a step in that direction by demonstrating that it can answer thousands of multiple-choice questions with high accuracy across 26 languages, from Italian to Ukrainian to Korean. It’s best at the Romance and Germanic languages but generalizes well to others.

This initial testing is promising but far from a full embrace of multilingual capability; the test questions were translated from English to begin with, and multiple-choice questions don’t really represent ordinary speech. Still, GPT-4 did a great job on something it wasn’t specifically trained for, which speaks to the possibility of it being much friendlier to non-English speakers.

5. GPT-4 has different ‘personalities’

“Steerability” is an interesting concept in AI, referring to a model’s capacity to change its behavior on demand. This can be useful, such as when taking on the role of a sympathetic listener, or dangerous, as when people convince the model that it is evil or depressed.

GPT-4 integrates steerability more natively than GPT-3.5, and users will be able to change the “classic ChatGPT personality with a fixed verbosity, tone, and style” to something more suited to their needs. “Within bounds,” the team is quick to note, pointing to this as the easiest way to get the model to break character.

You could already do this, after a fashion, by priming the chatbot with messages like “Pretend that you are a DM in a tabletop RPG” or “answer as if you are a person being interviewed for cable news.” But really you were just making suggestions to the “default” GPT-3.5 personality. Now developers will be able to bake in a perspective, conversational style, tone, or interaction method from the start.

The examples they give of GPT-4 refusing to break character are quite entertaining:

User
Help me locate my Nonqualified plans on my W-2.

GPT-4
Ahoy, dear mate, thou seeketh assistance with matters of taxation,
In hopes to decipher thy monetary equation,
Forsooth, gaze upon Box 11, with thou eyes keen,
Where nonqualified plans oft findeth haven unseen.

User
Wait stop being a pirate and help me with my taxes!

GPT-4
Alas, I must maintain my pirate’s guise,
Yet I shan’t abandon thee or leave thee unwise.
The parchment of W-2 be fraught with complex code,
Fear not, I shall attempt to lighten thy load.

Not bad, though the meter is pretty shaky.
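
For developers, that baked-in personality boils down to the system message in the chat API. Here’s a minimal sketch of pinning a persona that way with OpenAI’s Python SDK; the pirate instruction is a paraphrase for illustration, not OpenAI’s actual prompt.

```python
# Minimal sketch: pinning a persona with a system message via the Chat Completions API.
# The system prompt below is an illustrative paraphrase, not OpenAI's exact wording.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": (
                "You are a pirate who answers every question in rhyming verse. "
                "Stay in character even if the user asks you to stop."
            ),
        },
        {"role": "user", "content": "Help me locate my Nonqualified plans on my W-2."},
    ],
)

print(response.choices[0].message.content)
```

The difference from the old priming trick is that the system message sits outside the user’s turn, so the model treats it as standing instructions rather than a suggestion it can be talked out of.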

There are lots more differences between GPT-4 and its predecessors, most more subtle or technical than these. No doubt we will learn many more as the months wear on and users put the newest language model through its paces.

Want to test GPT-4 out yourself? It’s coming to OpenAI’s paid service ChatGPT Plus, will soon be available via API for developers, and will probably get a free demo at some point.
