belyakoff/llama-3.2-3b-instruct-fine-tuned-gptq-8bit
Pipeline: Text Generation
Libraries: Transformers, Safetensors
Datasets: Vikhrmodels/GrandMaster-PRO-MAX, Vikhrmodels/Grounded-RAG-RU-v2
Languages: Russian, English
Tags: code, rag, question answering, conversational, Inference Endpoints
License: apache-2.0
README.md exists but content is empty.
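Since the model card itself is empty, the following is only a minimal usage sketch, not an example provided by the model author. It assumes a recent transformers release with a GPTQ backend available (e.g. optimum with auto-gptq, or gptqmodel) plus accelerate, and the prompt text is purely illustrative.

```python
# Minimal sketch: load the 8-bit GPTQ checkpoint and generate a chat reply.
# Assumes transformers with a GPTQ backend (optimum + auto-gptq or gptqmodel)
# and accelerate are installed; not an official example from the model author.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "belyakoff/llama-3.2-3b-instruct-fine-tuned-gptq-8bit"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Llama-3.2 Instruct checkpoints ship a chat template, so format prompts with it.
messages = [{"role": "user", "content": "Кратко объясни, что такое RAG."}]  # illustrative Russian prompt
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```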
Downloads last month: 51
Safetensors model size: 1.13B params
Tensor types: I32 · FP16
Inference Providers (Text Generation): this model is not currently available via any of the supported third-party Inference Providers, and it is not deployed on the HF Inference API.
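Because no hosted provider serves this checkpoint, local inference is the fallback. A short sketch with the transformers text-generation pipeline (recent versions accept chat-style message lists directly), under the same GPTQ-capable setup assumed above:

```python
# Local fallback sketch: run the model through the text-generation pipeline.
# Assumes the same GPTQ-capable transformers setup as the example above.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="belyakoff/llama-3.2-3b-instruct-fine-tuned-gptq-8bit",
    device_map="auto",
)
messages = [{"role": "user", "content": "What is retrieval-augmented generation?"}]
result = generator(messages, max_new_tokens=128)
print(result[0]["generated_text"][-1]["content"])  # assistant turn appended by the pipeline
```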
Model tree for belyakoff/llama-3.2-3b-instruct-fine-tuned-gptq-8bit
Base model: meta-llama/Llama-3.2-3B-Instruct
Finetuned from the base model (one of 226 finetunes): this model
Datasets used to train belyakoff/llama-3.2-3b-instruct-fine-tuned-gptq-8bit
Vikhrmodels/GrandMaster-PRO-MAX • Updated Oct 25, 2024 • 155k • 370 • 60
Vikhrmodels/Grounded-RAG-RU-v2 • Updated Dec 14, 2024 • 50.2k • 67 • 12
Collection including belyakoff/llama-3.2-3b-instruct-fine-tuned-gptq-8bit
Llama (Collection) • 3 items • Updated Sep 30, 2024