OpenAI is testing a version of GPT-4 that can ‘remember’ long conversations

OpenAI has built a version of GPT-4, its latest text-generating model, that can “remember” roughly 50 pages of content thanks to a greatly expanded context window.

That might not sound significant. But it’s five times as much information as the vanilla GPT-4 can hold in its “memory” and eight times as much as GPT-3.

“The model is able to flexibly use long documents,” Greg Brockman, OpenAI co-founder and president, said during a live demo this afternoon. “We want to see what kinds of applications [this enables].”

Where text-generating AI is concerned, the context window refers to the text the model considers before generating additional text. While models like GPT-4 “learn” to write by training on billions of examples of text, they can only consider a small fraction of that text at a time — determined chiefly by the size of their context window.
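To make that concrete, here’s a minimal sketch of the idea using tiktoken, the open source tokenizer OpenAI publishes for its GPT models. The 8,192-token figure is vanilla GPT-4’s published limit; the file name and helper are purely illustrative, not anything from OpenAI’s code.

```python
# Minimal illustration of a context window: the model can only attend to the
# most recent `window` tokens of its input; anything earlier is simply cut off.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by the GPT-4 family

def visible_context(text: str, window: int = 8_192) -> str:
    """Return only the trailing slice of `text` that fits inside the window."""
    tokens = enc.encode(text)
    return enc.decode(tokens[-window:])  # everything earlier is effectively "forgotten"

transcript = open("chat_transcript.txt").read()  # hypothetical long conversation
total = len(enc.encode(transcript))
print(f"{total} tokens total; the model only sees the last {min(total, 8_192)}")
```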

Models with small context windows tend to “forget” the content of even very recent conversations, leading them to veer off topic. After a few thousand words or so, they also forget their initial instructions, extrapolating their behavior from the last information within their context window rather than from the original request.

Allen Pike, a former software engineer at Apple, colorfully explains it this way:

“[The model] will forget anything you try to teach it. It will forget that you live in Canada. It will forget that you have kids. It will forget that you hate booking things on Wednesdays and please stop suggesting Wednesdays for things, damnit. If neither of you has mentioned your name in a while, it’ll forget that too. Talk to a [GPT-powered] character for a little while, and you can start to feel like you are kind of bonding with it, getting somewhere really cool. Sometimes it gets a little confused, but that happens to people too. But eventually, the fact it has no medium-term memory becomes clear, and the illusion shatters.”

We’ve not yet been able to get our hands on the version of GPT-4 with the expanded context window, gpt-4-32k. (OpenAI says that it’s processing requests for the high- and low-context GPT-4 models at “different rates based on capacity.”) But it’s not difficult to imagine how conversations with it might be vastly more compelling than those with the previous-gen model.
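For developers who do get access, the expanded-context model is requested the same way as vanilla GPT-4 — by passing the gpt-4-32k model name to the chat completions API. A rough sketch with the openai Python client as it exists today; the API key, file name and prompts below are placeholders, not anything from OpenAI:

```python
# Hedged sketch: calling the 32k-context model through the chat completions API,
# assuming the account has been granted gpt-4-32k access.
import openai

openai.api_key = "sk-..."  # placeholder; load a real key from an environment variable

long_document = open("fifty_page_report.txt").read()  # placeholder ~50-page document

response = openai.ChatCompletion.create(
    model="gpt-4-32k",  # the expanded, 32,768-token context window variant
    messages=[
        {"role": "system", "content": "You are a careful, concise summarizer."},
        {"role": "user", "content": f"Summarize the key decisions in:\n\n{long_document}"},
    ],
)
print(response["choices"][0]["message"]["content"])
```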

With a bigger “memory,” GPT-4 should be able to converse relatively coherently for hours — several days, even — as opposed to minutes. And perhaps more importantly, it should be less likely to go off the rails. As Pike notes, one of the reasons chatbots like Bing Chat can be prodded into behaving badly is because their initial instructions — to be a helpful chatbot, respond respectfully and so on — are quickly pushed out of their context windows by additional prompts and responses.
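OpenAI hasn’t detailed how Bing Chat manages its history, but a common client-side pattern — sketched below purely as an illustration, under our own assumptions — is to keep the system instructions pinned and drop the oldest turns first once a conversation outgrows the window. A 32k-token window doesn’t change the mechanic; it just means the trimming kicks in far later.

```python
# Illustrative sketch (not Bing Chat's actual logic): keep the system prompt
# pinned and drop the oldest conversational turns first, so the model's
# original instructions never fall out of the context window.
# Token counts here ignore the small per-message formatting overhead.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

def trim_history(messages: list[dict], limit: int = 8_192) -> list[dict]:
    """Drop the oldest non-system turns until the history fits in the window."""
    tokens = lambda m: len(enc.encode(m["content"]))
    system = [m for m in messages if m["role"] == "system"]
    turns = [m for m in messages if m["role"] != "system"]
    while turns and sum(map(tokens, system + turns)) > limit:
        turns.pop(0)  # the system prompt is never the one to go
    return system + turns
```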

It can be a bit more nuanced than that, but the context window plays a major part in grounding these models, without a doubt. In time, we’ll see what sort of tangible difference it makes.
