Selene-1-Mini
Playground | Technical report | GitHub | Sign up for the API
This model was quantised to an 8-bit (W8A8) format using GPTQ and SmoothQuant, starting from AtlaAI/Selene-1-Mini-Llama-3.1-8B.
This was done with vLLM's llm-compressor library (https://docs.vllm.ai/en/stable/features/quantization/int8.html).
Refer to the original model card for more details on the model.
This quantisation was calibrated using a sample of 512 datapoints from the data used to train Selene-1-Mini. As a result, our quantised models show minimal performance degradation, losing <0.5% overall across benchmarks!
For reference, a GPTQ quantized 8-bit Llama-3.1-8B shows ~1.5% degradation across benchmarks.
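Below is a minimal sketch of what a SmoothQuant + GPTQ W8A8 run with llm-compressor looks like. The calibration dataset, sequence length, and output directory here are placeholders (the actual run used 512 samples drawn from Selene-1-Mini's training data, which is not shown), and exact import paths may vary between llm-compressor versions.

```python
# Minimal sketch of a SmoothQuant + GPTQ W8A8 quantisation with llm-compressor.
# NOTE: the calibration dataset below (ultrachat_200k) is a public stand-in;
# the actual quantisation used 512 samples from Selene-1-Mini's training data.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.modifiers.smoothquant import SmoothQuantModifier
from llmcompressor.transformers import oneshot

MODEL_ID = "AtlaAI/Selene-1-Mini-Llama-3.1-8B"
NUM_CALIBRATION_SAMPLES = 512
MAX_SEQUENCE_LENGTH = 2048
SAVE_DIR = "Selene-1-Mini-Llama-3.1-8B-W8A8"  # placeholder output path

model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto", torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Build a small calibration set: apply the chat template, then tokenize.
ds = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft")
ds = ds.shuffle(seed=42).select(range(NUM_CALIBRATION_SAMPLES))
ds = ds.map(lambda ex: {"text": tokenizer.apply_chat_template(ex["messages"], tokenize=False)})
ds = ds.map(
    lambda ex: tokenizer(ex["text"], max_length=MAX_SEQUENCE_LENGTH, truncation=True, add_special_tokens=False),
    remove_columns=ds.column_names,
)

# SmoothQuant migrates activation outliers into the weights, then GPTQ quantises
# weights and activations to 8 bits (W8A8); the lm_head stays in full precision.
recipe = [
    SmoothQuantModifier(smoothing_strength=0.8),
    GPTQModifier(targets="Linear", scheme="W8A8", ignore=["lm_head"]),
]

oneshot(
    model=model,
    dataset=ds,
    recipe=recipe,
    max_seq_length=MAX_SEQUENCE_LENGTH,
    num_calibration_samples=NUM_CALIBRATION_SAMPLES,
)

# Save in compressed-tensors format so vLLM can load the checkpoint directly.
model.save_pretrained(SAVE_DIR, save_compressed=True)
tokenizer.save_pretrained(SAVE_DIR)
```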
Base model: meta-llama/Llama-3.1-8B
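Once saved in compressed-tensors format, the W8A8 checkpoint can be served with vLLM as usual. The repo ID below is a placeholder for this quantised model, and the prompt is illustrative only; see the original Selene-1-Mini model card for the expected evaluation prompt format.

```python
# Rough usage sketch: load the W8A8 checkpoint with vLLM (repo ID is a placeholder).
from vllm import LLM, SamplingParams

llm = LLM(model="AtlaAI/Selene-1-Mini-Llama-3.1-8B-W8A8")  # placeholder repo/path
params = SamplingParams(temperature=0.0, max_tokens=512)

# Prompt formatting per the original Selene-1-Mini model card is omitted here.
outputs = llm.generate(["Evaluate the following response against the given criteria: ..."], params)
print(outputs[0].outputs[0].text)
```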