Open implementation of LLaMA 7B, second-stage pre-trained on Russian-language text. Training is not complete yet: after roughly 200M tokens, the model reaches a perplexity of 3.5 on the evaluation dataset.
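
Below is a minimal sketch of how such a checkpoint could be loaded with Hugging Face `transformers` and its perplexity measured on a sample text. The repo id `user/llama-7b-ru` and the evaluation snippet are placeholders, not taken from this card; the actual eval dataset is not specified here.

```python
# Sketch: load the checkpoint and compute perplexity on a sample text.
# "user/llama-7b-ru" is a hypothetical repo id, not the real model path.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "user/llama-7b-ru"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.float16, device_map="auto"
)
model.eval()

text = "Пример русского текста для оценки перплексии."  # placeholder eval text
enc = tokenizer(text, return_tensors="pt").to(model.device)

with torch.no_grad():
    # With labels equal to input_ids, the model returns the mean
    # token-level cross-entropy loss; perplexity is exp(loss).
    out = model(**enc, labels=enc["input_ids"])

print(f"perplexity: {math.exp(out.loss.item()):.2f}")
```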