This Model

This is a Llama 3.1 8B LLM with partial continued pretraining (based on unsloth/Meta-Llama-3.1-8B). Training was done on 200k articles from the 2023 Arabic Wikipedia.
This is a proof-of-concept demo only and should never be used in production. The model was subsequently instruction fine-tuned for classical Arabic poetry generation (toy model: https://huggingface.co/akhooli/llama31ft).
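As a minimal sketch, the checkpoint can be loaded for plain text continuation with Hugging Face Transformers. The model id comes from this card; the prompt format and generation settings are illustrative assumptions (the model was continued-pretrained on raw Wikipedia text, not chat data), so adjust to taste.

```python
MODEL_ID = "akhooli/llama31pretrained2"


def build_prompt(topic: str) -> str:
    # Plain continuation prompt: the base was continued-pretrained on
    # Arabic Wikipedia prose, so no chat template is assumed.
    return f"{topic}\n"


def generate(topic: str, max_new_tokens: int = 128) -> str:
    # Imported lazily: pulling the multi-billion-parameter checkpoint
    # is heavy, so nothing downloads until generation is requested.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
    inputs = tokenizer(build_prompt(topic), return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)
```

Since this is a continued-pretraining checkpoint, expect free-form completion behavior; for poetry-style outputs, use the fine-tuned adapter linked above instead.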

Model size: 4.65B params (Safetensors; tensor types F32, BF16, U8)

Model tree for akhooli/llama31pretrained2: 1 adapter model