joongi007/QI-neural-chat-7B-ko-DPO-GGUF
License: apache-2.0
The original model is QuantumIntelligence/QI-neural-chat-7B-ko-DPO, quantized to GGUF using llama.cpp.
Prompt template:
### System: {System} ### User: {User} ### Assistant: {Assistant}
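As a minimal sketch, the template can be filled in and run locally with llama-cpp-python. The GGUF filename, system message, and generation settings below are assumptions for illustration, not values taken from this repository:

from llama_cpp import Llama

# Assumed local filename -- use whichever quantization variant you downloaded.
llm = Llama(model_path="QI-neural-chat-7B-ko-DPO.Q4_K_M.gguf", n_ctx=4096)

# Fill the model card's prompt template.
prompt = (
    "### System: You are a helpful Korean assistant. "
    "### User: 안녕하세요? 자기소개를 해주세요. "
    "### Assistant:"
)

out = llm(prompt, max_tokens=256, stop=["### User:"])
print(out["choices"][0]["text"])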
Downloads last month: 21
Format: GGUF
Model size: 7.24B params
Architecture: llama
Quantization variants:

  Bits    Quant    Size
  2-bit   Q2_K     2.72 GB
  3-bit   Q3_K_S   3.16 GB
          Q3_K_M   3.52 GB
          Q3_K_L   3.82 GB
  4-bit   Q4_K_S   4.14 GB
          Q4_0     4.11 GB
          Q4_1     4.55 GB
          Q4_K_M   4.37 GB
  5-bit   Q5_K_S   5 GB
          Q5_0     5 GB
          Q5_1     5.44 GB
          Q5_K_M   5.13 GB
  6-bit   Q6_K     5.94 GB
  8-bit   Q8_0     7.7 GB
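A quick sketch of fetching one variant from the Hub with huggingface_hub. The exact .gguf filename is an assumption; check the repository's file listing for the real name:

from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="joongi007/QI-neural-chat-7B-ko-DPO-GGUF",
    filename="QI-neural-chat-7B-ko-DPO.Q4_K_M.gguf",  # assumed filename
)
print(model_path)

Smaller variants (e.g. Q4_K_M) trade some quality for lower memory use, while Q6_K and Q8_0 stay closer to the original weights at roughly 6-8 GB.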
This model is included in the MY-GGUF collection (11 items, updated Oct 7, 2024).