Ok, this seems bunk basically because they never really provide evidence of "better".
> ... traditional gold-standard approaches use human evaluators that score the quality of generated responses, which can be costly. However, since chat AIs are by definition deployed in social environments with humans, one can leverage statistics of user interaction as a meaningful and aligned measure of chat AI engagingness and quality. To assess the 'quality' of a chat AI, we consider two main proxy functions: the industry standard user retention and the main objective function, user engagement.
Maybe retention and engagement _are_ sufficiently well correlated to human evaluations, but you should probably do both and show that they're strongly correlated before you decide to just drop the human evaluators in favor of your cheap proxy measurements.
And in this field, where there are some known issues with chat LLMs, perhaps it's important to check stuff like:
- Does the model seem "engaging" just b/c the user has to refine their prompt several times before they get a satisfying response?
- Do responses include a lot of hallucinations which might be engaging but not true?
- Do successive responses show decreased consistency or coherence between messages, in a way that might accidentally elicit continued engagement?
Overall, it seems sloppy to believe that it's not a waste of humans' time to talk to your chatbots, and not a waste of readers' time to look at this paper about your chatbots, but that it's somehow too expensive for you to actually measure the quality of responses from your chatbots.
They're making chatbots specifically for humans to waste time with (a.k.a. entertainment).
Engagement and user retention are directly connected to their bottom line in a way that quality responses (e.g. introducing you to a more fulfilling hobby than chatting with AIs) are not.
The paper's title makes a stronger claim than the abstract, and the abstract makes a stronger claim than the paper. It's as if they couldn't decide which paper to write.
Edit: Thinking about it, this is exactly what you might expect from a paper written by a stochastic mix of experts.
Optimizing AIs to be addictive to humans is exactly how humanity will end. It was the natural endpoint of social media, and market forces will push the same outcome in this industry.
People worry about the robot uprising killing all humans but never think about the far more likely AI domestication of humans.
They are presenting a real-world use case where retention and engagement are clearly the metrics of interest. It's not even clear what "human evaluations" would mean in this context.
Kudos to not falling into the benchmark / human eval trap, and just testing your theories directly at scale in a deployment setting.
"Responses are selected randomly from a group of base chat AIs. ... The response generated by a specific chat AI is conditional on all previous responses generated by the previously selected chat AIs."
That's all? That works? Useful.
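For concreteness, the quoted mechanism seems to boil down to something like the sketch below. The `generate_*` functions are hypothetical stand-ins for the base chat AIs; the only real trick is that whichever model is chosen conditions on the full shared history, including the turns written by the other models.

```python
import random

# Hypothetical stand-ins for the base chat AIs: each takes the full
# conversation history and returns the next assistant response.
def generate_a(history): return "reply from model A"
def generate_b(history): return "reply from model B"
def generate_c(history): return "reply from model C"

BASE_CHAT_AIS = [generate_a, generate_b, generate_c]

def blended_reply(history):
    """Pick one base model uniformly at random and let it answer,
    conditioned on everything said so far, no matter which model
    (or the user) produced each earlier turn."""
    model = random.choice(BASE_CHAT_AIS)
    reply = model(history)
    history.append({"role": "assistant", "content": reply})
    return reply

history = [{"role": "user", "content": "Hi there!"}]
print(blended_reply(history))   # the next turn may be answered by a different model
```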
Could that be extended? It doesn't seem inherent in this that all the chat AIs have to be LLMs. Some might be special-purpose systems. Solvers or knowledge bases, such as Wolfram Alpha or a database front end, could play too. Systems at the Alexa/Siri level that can do simple tasks. Domain-specific systems with natural language in and out have been around for decades.
Why aren't they computing the next-token marginal and sampling from that? The best explanation I can come up with is that it's a reasonable way to work around dealing with different tokenizers.
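For comparison, token-level ensembling would look roughly like the sketch below: average the per-model next-token distributions and sample from the mixture. It only works if the models share a vocabulary, which is presumably the obstacle; the logits here are made up rather than taken from real models.

```python
import numpy as np

def softmax(logits):
    z = np.asarray(logits, dtype=np.float64)
    z -= z.max()                      # for numerical stability
    p = np.exp(z)
    return p / p.sum()

# Made-up next-token logits from two models over the SAME vocabulary;
# with different tokenizers the two distributions wouldn't even line up.
logits_a = [2.0, 0.5, -1.0, 0.1]
logits_b = [1.0, 1.5, -0.5, 0.0]

# Marginal over models: average the distributions, then sample a token.
p_marginal = 0.5 * softmax(logits_a) + 0.5 * softmax(logits_b)
next_token_id = np.random.choice(len(p_marginal), p=p_marginal)
print(next_token_id, p_marginal.round(3))
```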
This is considerably weirder than ensembling because they are not, in any sense, averaging the predictions or taking a majority vote or in some way collectively processing multiple models to yield a single slightly-better meta-model. They are just... randomly picking a model to use to generate the next response. And users find that more entertaining to talk to?
As there is no analysis of why that is better, and no evaluation of alternative approaches (what if you alternated A/B/A/B? Or cycled through them systematically A/B/C/A? Or picked a different shuffle of A/B/C each time?), it's hard to say what this means.
My best guess is that this reflects the fact that GPT is, thanks to RLHF, boring. It has mode-collapse and does things like tell one of a handful of jokes every time. It will write a rhyming poem even if you ask it for a different kind of poem. And so on.
The random sampling of different models serves as a rather ad hoc way of avoiding the RLHF boringness. The various models might all be tuned similarly, but they won't yield identical results, and this sneaks in response diversity through the backdoor, undoing the same-ness from the RLHF mode collapse.
You used to be able to increase the sampling temperature on GPT to undo some of this blandness, but since RLHF flattens the logits in GPT-4, it's unclear if that still helps. So swapping in random models may be a useful trick. (Although fixing the tuning itself would be much more desirable.)
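(For reference, temperature is just a division of the logits before the softmax; the toy numbers below show how it spreads out a sharply peaked next-token distribution, and how little there is to recover if the distribution is already flattish. The logits are made up and not from any particular model.)

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = np.asarray(logits, dtype=np.float64) / temperature
    z -= z.max()
    return np.exp(z) / np.exp(z).sum()

peaked   = [8.0, 1.0, 0.5, 0.2]   # mode-collapsed: one answer dominates
flattish = [1.2, 1.0, 0.9, 0.8]   # little left for temperature to recover

for name, logits in [("peaked", peaked), ("flattish", flattish)]:
    print(name, softmax(logits, 1.0).round(3), softmax(logits, 1.5).round(3))
```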
Not quite. The outputs of the models become part of the prompt history of all the models. So they can assist each other. For example, one model might produce a highly technical reply. Then the user can ask for explanations, to be provided by a model with better language capability but less domain expertise.
I really would like them to compare against GPT-4 instead of claiming victory when matching 3.5. To me, GPT-4 is the first one that's usable for a lot of professional work; 3.5 is fun and gets some things right, but it's more like a demo.
>the baseline models they test and blend are really terrible as well.
If the effect is there I would guess a few bad models should outperform a mediocre one, and a few mediocre ones should outperform a state-of-the-art one.
Of course it would be good to show the same again with GPT-4 and maybe three GPT-3.5-sized models, but that isn't necessary to show that such an effect exists, and it may be cost-prohibitive for them as a research team. Whether their methodology for demonstrating the effect is sound is another discussion.
Personally, I don't find these results surprising: our brain is also somewhat compartmentalized, so why wouldn't the same hold for a good AI system?
The more difficult part is how to train these subnetworks optimally.
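One standard answer (not what this paper does; it just alternates whole models) is a mixture-of-experts layer, where a gating network and the expert subnetworks are trained jointly end to end. A minimal PyTorch sketch of the idea:

```python
import torch
import torch.nn as nn

class TinyMoE(nn.Module):
    """Soft mixture of experts: the gate learns which subnetwork to trust
    for each input, and gate + experts train jointly by backprop."""
    def __init__(self, d_in, d_out, n_experts=4):
        super().__init__()
        self.experts = nn.ModuleList(nn.Linear(d_in, d_out) for _ in range(n_experts))
        self.gate = nn.Linear(d_in, n_experts)

    def forward(self, x):
        weights = torch.softmax(self.gate(x), dim=-1)             # (batch, n_experts)
        outputs = torch.stack([e(x) for e in self.experts], -1)   # (batch, d_out, n_experts)
        return (outputs * weights.unsqueeze(1)).sum(-1)           # gated blend

moe = TinyMoE(d_in=16, d_out=8)
y = moe(torch.randn(4, 16))   # gradients flow through gate and experts alike
print(y.shape)                # torch.Size([4, 8])
```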
Is it weird to refer to GPT-3.5 as "state of the art" when GPT-4 is right there? Actually the paper uses davinci interchangeably with GPT-3.5 (sometimes without a hyphen) and ChatGPT.
So many people seem to treat beating GPT-3.5 as the hallmark of progress, which is an immediate hint that they have no idea. There's a clear and vast difference between GPT-4 and 3.5, making GPT-3.5 almost worthless except perhaps for fast summarisation tasks.
You really haven't done much with those models if they seem remotely comparable.
To me, GPT-3.5 can only summarise and provide general answers to questions, whereas GPT-4 can actually handle nuance and show what looks to me like reasoning.
The paper refers to ChatGPT as a 175B parameter LLM. This is almost certainly incorrect; the original largest version of GPT-3 was 175B but analysis of the speed and cost of the current model as well as public statements by OpenAI indicate it’s as much as 5-10x smaller.
It was listed as 20B in a comparison table in a paper co-written by Microsoft, but they've since claimed that was just an error. And they'd need to be sitting on some really impressive distillation techniques to shrink a 175B model down to 20B with only a slight drop in performance.
OpenAI have been sitting on GPT-4 for months, and on the base model even longer. I would not be surprised if they did some or all of the distillation of the model with GPT-4.
Mixtral is 56B combined; if we subtract a little for MoE inefficiencies, call it roughly 40B combined. That's twice 20B, and we have seen new models beat others twice their size.
That and a massive amount of excellent data for alignment should produce some great results.
I don't think it's out of the realm of possibility that 20B is real.
...But it's not what this paper is describing; they are basically alternating models, AFAIK. I also have other nitpicks with the paper, like using extremely old/mediocre chat models as bases.
When this is all settled I suspect we're going to find ourselves the drivers of a chariot, with each horse being an external and artificial mind given direction by our evolved needs.
The will to drive the chariot is different from intelligence, in the same way that it's different from strength, and horses didn't domesticate humans.
Of course, if we build systems that have a goal of dominating, are smarter than us, and are left to run for a while, we could be in trouble, in the same way that we could be in trouble detonating a bunch of atom bombs, just maybe less obviously.
Now that I think about it, doesn't this "technique" triple the amount of compute and memory per generated token since each model needs to also compute and store the KV values for the two previous tokens it didn't generate and thus has never seen?
Edit: On second thought, depending on how it's actually implemented, the other two tokens are probably run through the model in parallel, so it shouldn't be all that much slower.
It doesn’t generate three responses for every turn. It randomly picks a model for every response, the claim being that the switching between different models leads to better conversations because of the diversity of each model’s training.
Correct me if I'm wrong but usually when you do normal token by token inference in a transformer you store calculations made in the previous step in a KV cache so you can reuse it instead of calculating it all over again.
But here, since the previous few tokens were produced by another model, the current model has never seen them and, by definition, doesn't have those calculations cached, yet it still needs them to properly compute attention for the new token.
It doesn’t appear to be token-by-token inference. Each new completion uses a different model, but the new completion is entirely created by that model.
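Right, so the practical cost shows up at the response level, not the token level: whichever model is picked for this turn has no KV cache for the turns written by the other models and has to prefill that part of the history before generating, assuming each model even keeps a cache between the turns it handles (itself an implementation detail). A toy accounting sketch with made-up token counts:

```python
import random

MODELS = ["model_A", "model_B", "model_C"]

history_tokens = 0                    # length of the shared conversation so far
cached = {m: 0 for m in MODELS}       # how much of it each model holds in its KV cache

def take_turn(user_tokens, reply_tokens):
    """One turn: the user speaks, then a randomly chosen model replies."""
    global history_tokens
    history_tokens += user_tokens
    model = random.choice(MODELS)
    # The chosen model must prefill every history token it hasn't cached,
    # including the responses written by the *other* models.
    prefill = history_tokens - cached[model]
    cached[model] = history_tokens + reply_tokens   # its own reply is cached too
    history_tokens += reply_tokens
    print(f"{model}: prefilled {prefill} tokens, then generated {reply_tokens}")

for _ in range(4):
    take_turn(user_tokens=30, reply_tokens=80)
```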
Foundational models are designed to be universally applicable, covering a wide range of use cases. While it's relatively easy to tailor smaller models to specific scenarios through overfitting, when a model is overly specialized, it loses its broad applicability and ceases to be a foundational model.