Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

llama3-1_8b_oh_v3.1_wo_evolinstruct - GGUF

- Model creator: https://huggingface.co/mlfoundations-dev/
- Original model: https://huggingface.co/mlfoundations-dev/llama3-1_8b_oh_v3.1_wo_evolinstruct/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama3-1_8b_oh_v3.1_wo_evolinstruct.Q2_K.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_llama3-1_8b_oh_v3.1_wo_evolinstruct-gguf/blob/main/llama3-1_8b_oh_v3.1_wo_evolinstruct.Q2_K.gguf) | Q2_K | 2.96GB |
| [llama3-1_8b_oh_v3.1_wo_evolinstruct.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_llama3-1_8b_oh_v3.1_wo_evolinstruct-gguf/blob/main/llama3-1_8b_oh_v3.1_wo_evolinstruct.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [llama3-1_8b_oh_v3.1_wo_evolinstruct.IQ3_S.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_llama3-1_8b_oh_v3.1_wo_evolinstruct-gguf/blob/main/llama3-1_8b_oh_v3.1_wo_evolinstruct.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [llama3-1_8b_oh_v3.1_wo_evolinstruct.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_llama3-1_8b_oh_v3.1_wo_evolinstruct-gguf/blob/main/llama3-1_8b_oh_v3.1_wo_evolinstruct.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [llama3-1_8b_oh_v3.1_wo_evolinstruct.IQ3_M.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_llama3-1_8b_oh_v3.1_wo_evolinstruct-gguf/blob/main/llama3-1_8b_oh_v3.1_wo_evolinstruct.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [llama3-1_8b_oh_v3.1_wo_evolinstruct.Q3_K.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_llama3-1_8b_oh_v3.1_wo_evolinstruct-gguf/blob/main/llama3-1_8b_oh_v3.1_wo_evolinstruct.Q3_K.gguf) | Q3_K | 3.74GB |
| [llama3-1_8b_oh_v3.1_wo_evolinstruct.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_llama3-1_8b_oh_v3.1_wo_evolinstruct-gguf/blob/main/llama3-1_8b_oh_v3.1_wo_evolinstruct.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [llama3-1_8b_oh_v3.1_wo_evolinstruct.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_llama3-1_8b_oh_v3.1_wo_evolinstruct-gguf/blob/main/llama3-1_8b_oh_v3.1_wo_evolinstruct.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [llama3-1_8b_oh_v3.1_wo_evolinstruct.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_llama3-1_8b_oh_v3.1_wo_evolinstruct-gguf/blob/main/llama3-1_8b_oh_v3.1_wo_evolinstruct.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [llama3-1_8b_oh_v3.1_wo_evolinstruct.Q4_0.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_llama3-1_8b_oh_v3.1_wo_evolinstruct-gguf/blob/main/llama3-1_8b_oh_v3.1_wo_evolinstruct.Q4_0.gguf) | Q4_0 | 4.34GB |
| [llama3-1_8b_oh_v3.1_wo_evolinstruct.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_llama3-1_8b_oh_v3.1_wo_evolinstruct-gguf/blob/main/llama3-1_8b_oh_v3.1_wo_evolinstruct.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [llama3-1_8b_oh_v3.1_wo_evolinstruct.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_llama3-1_8b_oh_v3.1_wo_evolinstruct-gguf/blob/main/llama3-1_8b_oh_v3.1_wo_evolinstruct.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [llama3-1_8b_oh_v3.1_wo_evolinstruct.Q4_K.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_llama3-1_8b_oh_v3.1_wo_evolinstruct-gguf/blob/main/llama3-1_8b_oh_v3.1_wo_evolinstruct.Q4_K.gguf) | Q4_K | 4.58GB |
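
As a minimal usage sketch (not part of the original card), one way to fetch and run one of the files above is with `huggingface_hub` and `llama-cpp-python`. The repo ID and filename match the Q4_K_M row in the table; the context size, prompt, and sampling settings are illustrative assumptions.

```python
# Sketch: download one quant from this repo and run a short completion.
# Assumes `pip install huggingface_hub llama-cpp-python`.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Repo ID and filename taken from the Q4_K_M entry in the table above.
gguf_path = hf_hub_download(
    repo_id="RichardErkhov/mlfoundations-dev_-_llama3-1_8b_oh_v3.1_wo_evolinstruct-gguf",
    filename="llama3-1_8b_oh_v3.1_wo_evolinstruct.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)  # n_ctx is an illustrative choice
out = llm("Explain GGUF quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```

Smaller quants (Q2_K, IQ3_*) trade quality for memory; Q8_0 is closest to the full-precision weights at roughly twice the size of the Q4 variants.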
| [llama3-1_8b_oh_v3.1_wo_evolinstruct.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_llama3-1_8b_oh_v3.1_wo_evolinstruct-gguf/blob/main/llama3-1_8b_oh_v3.1_wo_evolinstruct.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [llama3-1_8b_oh_v3.1_wo_evolinstruct.Q4_1.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_llama3-1_8b_oh_v3.1_wo_evolinstruct-gguf/blob/main/llama3-1_8b_oh_v3.1_wo_evolinstruct.Q4_1.gguf) | Q4_1 | 4.78GB |
| [llama3-1_8b_oh_v3.1_wo_evolinstruct.Q5_0.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_llama3-1_8b_oh_v3.1_wo_evolinstruct-gguf/blob/main/llama3-1_8b_oh_v3.1_wo_evolinstruct.Q5_0.gguf) | Q5_0 | 5.21GB |
| [llama3-1_8b_oh_v3.1_wo_evolinstruct.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_llama3-1_8b_oh_v3.1_wo_evolinstruct-gguf/blob/main/llama3-1_8b_oh_v3.1_wo_evolinstruct.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [llama3-1_8b_oh_v3.1_wo_evolinstruct.Q5_K.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_llama3-1_8b_oh_v3.1_wo_evolinstruct-gguf/blob/main/llama3-1_8b_oh_v3.1_wo_evolinstruct.Q5_K.gguf) | Q5_K | 5.34GB |
| [llama3-1_8b_oh_v3.1_wo_evolinstruct.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_llama3-1_8b_oh_v3.1_wo_evolinstruct-gguf/blob/main/llama3-1_8b_oh_v3.1_wo_evolinstruct.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [llama3-1_8b_oh_v3.1_wo_evolinstruct.Q5_1.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_llama3-1_8b_oh_v3.1_wo_evolinstruct-gguf/blob/main/llama3-1_8b_oh_v3.1_wo_evolinstruct.Q5_1.gguf) | Q5_1 | 5.65GB |
| [llama3-1_8b_oh_v3.1_wo_evolinstruct.Q6_K.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_llama3-1_8b_oh_v3.1_wo_evolinstruct-gguf/blob/main/llama3-1_8b_oh_v3.1_wo_evolinstruct.Q6_K.gguf) | Q6_K | 6.14GB |
| [llama3-1_8b_oh_v3.1_wo_evolinstruct.Q8_0.gguf](https://huggingface.co/RichardErkhov/mlfoundations-dev_-_llama3-1_8b_oh_v3.1_wo_evolinstruct-gguf/blob/main/llama3-1_8b_oh_v3.1_wo_evolinstruct.Q8_0.gguf) | Q8_0 | 7.95GB |

Original model description:
---
library_name: transformers
license: llama3.1
base_model: meta-llama/Meta-Llama-3.1-8B
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: llama3-1_8b_oh_v3.1_wo_evolinstruct
  results: []
---

# llama3-1_8b_oh_v3.1_wo_evolinstruct

This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B) on the mlfoundations-dev/oh_v3.1_wo_evolinstruct dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6348
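
As a minimal sketch (not from the original card), the full-precision fine-tune can be loaded with `transformers`; the repo ID is the one above, while the dtype, device placement, and prompt are illustrative assumptions.

```python
# Sketch: load the original (non-quantized) fine-tune with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mlfoundations-dev/llama3-1_8b_oh_v3.1_wo_evolinstruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # illustrative; fp16/fp32 also work
    device_map="auto",
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=16)[0]))
```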

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 32
- total_train_batch_size: 512
- total_eval_batch_size: 256
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.1
- lr_scheduler_warmup_steps: 1738
- num_epochs: 3.0

A hedged sketch of how these settings map onto `transformers` training arguments follows the Framework versions section below.

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.6452        | 1.0   | 391  | 0.6435          |
| 0.5983        | 2.0   | 782  | 0.6336          |
| 0.5549        | 3.0   | 1173 | 0.6348          |

### Framework versions

- Transformers 4.46.1
- Pytorch 2.4.0
- Datasets 3.0.2
- Tokenizers 0.20.3
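
As referenced in the hyperparameters section, here is a hedged sketch of those settings expressed as `transformers` `TrainingArguments`. The actual run used llama-factory with its own config format, so this mapping is an assumption for illustration, not the training script.

```python
# Sketch: the listed hyperparameters mapped onto transformers TrainingArguments.
# Assumption: the llama-factory config translates one-to-one to these fields.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="llama3-1_8b_oh_v3.1_wo_evolinstruct",  # hypothetical path
    learning_rate=5e-6,
    per_device_train_batch_size=16,  # card: train_batch_size
    per_device_eval_batch_size=8,    # card: eval_batch_size
    seed=42,
    num_train_epochs=3.0,
    # Card lists "constant" plus warmup settings; in transformers that
    # combination corresponds to the constant_with_warmup schedule.
    lr_scheduler_type="constant_with_warmup",
    warmup_ratio=0.1,
    warmup_steps=1738,  # transformers gives warmup_steps precedence over ratio
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
# With 32 devices, the effective train batch is 32 * 16 = 512,
# matching the card's total_train_batch_size.
```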