Uploaded model

  • Developed by: sal076
  • License: llama 3.1
  • Finetuned from model: unsloth/meta-llama-3.1-8b-bnb-4bit

This is a rough finetune thrown together quickly as a proof of concept; it is not intended to be a usable model.

An updated, improved version is available; use one of these instead:

Q4_K_M: https://huggingface.co/sal076/L3.1_RP_TEST3-Q4_K_M-GGUF

Q5_K_M: https://huggingface.co/sal076/L3.1_RP_TEST3-Q5_K_M-GGUF
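For reference, here is a minimal sketch of running one of the GGUF quants with llama-cpp-python. The filename glob, context size, and prompt are assumptions for illustration, not taken from the repo; check the repo's file listing if the pattern doesn't match.

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Download and load the Q4_K_M quant directly from the Hugging Face repo.
llm = Llama.from_pretrained(
    repo_id="sal076/L3.1_RP_TEST3-Q4_K_M-GGUF",
    filename="*q4_k_m.gguf",  # glob pattern; adjust to the actual .gguf file name in the repo
    n_ctx=8192,               # assumed context window; raise or lower to fit your hardware
)

# Simple chat-style generation.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in character."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```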

