Why dataset tag?
Why does this model have the dataset tag:
datasets:
- arcee-ai/EvolKit-20k
Is this a finetuned model? If so, why haven't you uploaded the base model? Nobody can finetune on top of this if it's an instruct model.
So the 130 finetunes of llama 3.1 8b instruct just don't exist?
I'm realizing the model card is very misleading, and this isn't a new distillation of llama-3.1-405b. It's just another finetune of llama-3.1-8b-instruct, like every other model.
Is distillation from 405b to 8b even possible (while keeping the model functioning and getting good results)?
I think the dataset was created using a distilled 405b.
I believe it was first distilled from 405b using: https://github.com/arcee-ai/DistillKit
Then EvolKit generated the additional data used to finetune after the distillation.
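For what it's worth, DistillKit's exact objective isn't quoted in this thread, but logit-based distillation of this kind is usually some variant of minimizing the KL divergence between the teacher's (405b) and student's (8b) temperature-softened output distributions. A minimal sketch of that standard loss (the temperature value and function names here are illustrative, not taken from DistillKit):

```python
import math

def softmax(logits, temperature=1.0):
    # temperature-softened probability distribution over the vocabulary
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on softened distributions, scaled by T^2
    (standard Hinton-style knowledge distillation; illustrative only)."""
    p = softmax(teacher_logits, temperature)  # teacher, e.g. 405b
    q = softmax(student_logits, temperature)  # student, e.g. 8b
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return kl * temperature ** 2
```

The loss is zero when the student exactly matches the teacher's distribution and grows as the two diverge, which is what pushes the 8b toward the 405b's behavior during training.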
There is nothing in the readme that in any way gives the impression that this is somehow a new base model - I will, however, add a link to one of our announcement overviews.
@Crystalcareai bruh "Llama-3.1-SuperNova-Lite is an 8B parameter model developed by Arcee.ai, based on the Llama-3.1-8B-Instruct architecture. It is a distilled version of the larger Llama-3.1-405B-Instruct model"
You literally say the model is distilled from llama-3.1-405b. Your wording is just very confusing. I see that you didn't mean to say it that way.