Comment on Meta asks the US government to block OpenAI’s switch to a for-profit
brucethemoose@lemmy.world 1 week ago
It’s not open source, but open weights: documented, relatively permissively licensed, and all the inference/finetuning libraries for it are open source.
MCasq_qsaCJ_234@lemmy.zip 1 week ago
I understand, but Meta holds the rights to Llama, and at any time they can change the license to make it less open just to make more money.
Currently it is open weight to attract customers; once there are no competitors, they will start to squeeze them.
brucethemoose@lemmy.world 1 week ago
Also, competition is stiff. Alibaba is currently handing them their butts with Qwen 2.5. DeepSeek (a Chinese startup), Tencent, and Mistral (French) are giving them a run for their money too, and there are even some that “continue-train” their old weights.
MCasq_qsaCJ_234@lemmy.zip 1 week ago
And what are some examples of those who continue-train old weights?
brucethemoose@lemmy.world 1 week ago
A small startup called Arcee AI actually “distilled” logits from several other models (Llama, Mistral) and used that data to continue-train Qwen 2.5 14B (which is itself Apache 2.0). The result is called SuperNova Medius, and it’s quite incredible for a 14B model… SOTA as far as I know, even with their meager GPU resources.
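For a sense of what logit distillation means mechanically, here is a toy numpy sketch of the generic objective (not Arcee’s actual pipeline, which involves a lot more, e.g. aligning different tokenizers): the student is pushed to match the teacher’s softened output distribution via KL divergence.

```python
import numpy as np

def log_softmax(logits, axis=-1):
    # Numerically stable log-softmax.
    shifted = logits - logits.max(axis=axis, keepdims=True)
    return shifted - np.log(np.exp(shifted).sum(axis=axis, keepdims=True))

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) over temperature-softened distributions.

    Generic knowledge-distillation loss; real continue-training pipelines
    mix this with an ordinary next-token cross-entropy term.
    """
    t_log_probs = log_softmax(teacher_logits / temperature)
    s_log_probs = log_softmax(student_logits / temperature)
    t_probs = np.exp(t_log_probs)
    # Mean KL over positions, scaled by T^2 (standard distillation scaling).
    kl = (t_probs * (t_log_probs - s_log_probs)).sum(axis=-1)
    return (temperature ** 2) * kl.mean()

rng = np.random.default_rng(0)
teacher = rng.normal(size=(4, 32000))  # 4 token positions, 32k-entry vocab
student = rng.normal(size=(4, 32000))
loss = distillation_loss(student, teacher)
print(float(loss))  # positive; zero only if student matches teacher exactly
```

The loss goes to zero exactly when the student reproduces the teacher’s distribution, which is the sense in which the big models’ “knowledge” gets baked into the smaller one.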
A company called Upstage “expands” models to larger parameter counts by continue-training them. Look up the SOLAR series.
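The expansion trick (“depth up-scaling” in the SOLAR paper) is simple to sketch: duplicate a trained layer stack, drop overlapping middle layers from each copy, concatenate, then continue-train. The numbers below follow the published SOLAR 10.7B recipe (32-layer base, 48-layer result) as I understand it; treat them as illustrative.

```python
def depth_up_scale(num_layers: int, overlap: int) -> list[int]:
    """Return the source-layer indices that make up the up-scaled stack.

    Copy A contributes its bottom (num_layers - overlap) layers,
    copy B contributes its top (num_layers - overlap) layers.
    """
    keep = num_layers - overlap
    first_copy = list(range(0, keep))                # bottom of copy A
    second_copy = list(range(overlap, num_layers))   # top of copy B
    return first_copy + second_copy

# 32-layer base model, 8 layers of overlap removed from each copy:
scaled = depth_up_scale(num_layers=32, overlap=8)
print(len(scaled))  # 48 layers in the expanded model
```

The expanded model starts out worse than the base (the seam between the copies is nonsense at first), which is why the continued pretraining step is the expensive part.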
And quite notably, Nvidia continue-trained Llama 3.1 70B and published the weights as Nemotron 70B. It was the best 70B model for a while, and may still be in some areas.
And some companies, like Cohere, continuously train the same model slowly and offer it over an API, but occasionally publish the weights to promote it.
brucethemoose@lemmy.world 1 week ago
No, they can’t, because you can just pull the git repo under the old license and use the weights as they were at the time of upload, just like any software on a git repository. And too many people have them downloaded to delete them from the internet.
There are also finetunes inheriting the old license, and those orgs are not going to pull the weights.
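The “old license stays available” point is just how git works, and Hugging Face model repos are plain git repos, so the license at any past commit stays addressable by its hash. A self-contained toy demo (a throwaway local repo, not a real model repo):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

# Upload weights under a permissive license.
echo "permissive community license v1" > LICENSE
git add LICENSE && git commit -qm "weights + original license"
old_commit=$(git rev-parse HEAD)

# Later, the owner swaps in a restrictive license.
echo "restrictive license v2" > LICENSE
git commit -qam "relicense"

# The original license (and the files shipped under it) are still
# retrievable from history by commit hash:
git show "$old_commit:LICENSE"
```

Relicensing only affects future releases; it can’t retroactively rewrite the terms the old snapshot was distributed under.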
MCasq_qsaCJ_234@lemmy.zip 1 week ago
And in that case, would a community fork of Llama keep up with Meta’s version? We are talking about AI that requires considerable development; companies would probably not participate, because it is not an open source license and its clauses are limiting in those respects.
You also have to consider that if a new version of Llama under a new license is three times better than Llama under the previous license, do you really think the community will keep developing the previous version?
brucethemoose@lemmy.world 1 week ago
Llama has tons of commercial use even with its “non open” license, which is basically just a middle finger to companies the size of Google or Microsoft. And yes, companies would keep using the old weights like nothing changed… because nothing did. Just like they keep using open source software that goes through drama.
Honestly, I have zero short-term worries about this because the space is so fiercely competitive. Also, much of the ecosystem (like Hugging Face and the inference libraries) is open source and out of their control.
And if they go API-only, honestly they will just get clobbered by Google, Claude, DeepSeek, or whomever.
In the longer term… transformers will be obsolete anyway.