I would have compared it to the fine-tuned version if it had been released under a truly open-source license. I think developers implementing LLMs care more about licensing than about the underlying details of the model.
Also, t5-base is 220M params vs. StableLM's 3B, so it's not really a fair comparison anyway.
Is it actually clear that license restrictions on the training data really do affect the model itself? I know OpenAI says you're not supposed to use the output of GPT-3/4 to train competing models, but that doesn't strike me as legally enforceable. Most of the discussions I've seen where lawyers weigh in argue that training these models is pretty clearly fair use, and therefore any copyright restrictions on the training data don't really affect the output. I suppose we won't know until a case actually goes to court, but it seems silly to preemptively say you can't use these fine-tuned models commercially because of a probably-unenforceable restriction on some of the training data.
Copyright restrictions are not the only possible restrictions.
If OpenAI says you're allowed to use their service under certain conditions, but you violate those conditions, then what's your legal basis for using the service? Forget about copyright; think about breach of contract, or even computer fraud and abuse.
But let's say you used the OpenAI GPT-4 service to generate training data for a new model, and then trained your model on that generated data. In theory OpenAI can ban you from continuing to use their API and maybe even sue you for breach of its terms of service, but that doesn't mean the model you created from that data is somehow illegal to use or distribute. You can still sell or give away that trained model, and there's nothing OpenAI can do about it.
Take the specific case of Alpaca: the Stanford team generated a fine-tuning dataset using GPT-3.5. Maybe OpenAI could sue them for doing that. But now that the dataset exists and is freely available, I'm not using OpenAI if I fine-tune a new model with it. I have no contract with OpenAI, I'm not using their service, and OpenAI has no copyright claim on the generated dataset itself. They have no legal claim against me using that dataset to fine-tune and release a model.
I disagree: they made the decision to use datasets with restrictive licensing, jumping on the Alpaca/GPT4All/ShareGPT bandwagon.
They also chose to toot their own horn about how open source their models are, even though for practical purposes half of their released models are no more open source than a leaked copy of LLaMA.
So just use their base model and fine-tune with a non-restrictive dataset (e.g. Databricks' Dolly 2.0 instructions)? You can get a decent LoRA fine-tune done in a day or so on consumer GPU hardware, I would imagine.
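A minimal sketch of what that could look like with Hugging Face peft (the model id, the "query_key_value" target module name, and all hyperparameters below are my assumptions, not anything Stability has published):

```python
# Hypothetical LoRA fine-tune of a StableLM base model on Dolly 2.0.
# Assumed: model id, GPT-NeoX-style attention module name, hyperparameters.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "stabilityai/stablelm-base-alpha-3b"  # assumed model id
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token    # NeoX tokenizers ship no pad token
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")

# Freeze the 3B base; train only small low-rank adapter matrices.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["query_key_value"], task_type="CAUSAL_LM"))

# Dolly 2.0: ~15k human-written instruction/response pairs, CC BY-SA licensed.
data = load_dataset("databricks/databricks-dolly-15k", split="train")

def to_features(ex):
    # Simple prompt format; ignores the dataset's optional `context` field.
    text = f"### Instruction:\n{ex['instruction']}\n\n### Response:\n{ex['response']}"
    return tokenizer(text, truncation=True, max_length=512)

data = data.map(to_features, remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    train_dataset=data,
    args=TrainingArguments("stablelm-dolly-lora",
                           per_device_train_batch_size=4,
                           gradient_accumulation_steps=8,
                           num_train_epochs=3, learning_rate=2e-4,
                           fp16=True, logging_steps=50),
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("stablelm-dolly-lora")  # saves adapter weights only
```

LoRA keeps the base frozen and only trains a few million adapter parameters, which is why a run like this plausibly fits in a day on a consumer GPU.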
The point here is that you can use their bases in place of LLaMA and not have to jump through the hoops, so the fine-tuned models are really just there for a bit of flash…
Looks like you're seeing the glass as half empty here. I'm not sure arguing was more time-efficient than just running the eval on the other set of weights.
*I wish I understood these things well enough to not have to ask, but alas, I'm just a basic engineer.