4darsh-Dev committed
Commit b613dae • 1 Parent(s): b473557
updated readme
README.md CHANGED
@@ -1,5 +1,5 @@
 ---
-license:
+license: other
 language:
 - en
 library_name: peft
@@ -10,4 +10,14 @@ tags:
 - llama-3-8b-autogptq
 - meta
 - quantized
----
+---
+
+# Model Card for 4darsh-Dev/Meta-Llama-3-8B-autogptq-4bit
+
+<!-- Provide a quick summary of what the model is/does. -->
+
+This repo contains a 4-bit quantized (using AutoGPTQ and PEFT) model of Meta's Meta-Llama-3-8B.
+
+## Model Details
+
+- Model creator: [Meta](https://huggingface.co/meta-llama)
+- Original model: [Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B)
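
For context, a minimal usage sketch (not part of the commit): the card only states that the weights are a 4-bit AutoGPTQ quantization of Meta-Llama-3-8B produced with PEFT and does not document a loading recipe. The snippet below assumes the repo ships a GPTQ checkpoint that transformers can load directly through the auto-gptq/optimum integration; if it instead contains only a PEFT adapter, the adapter would need to be applied to a base model with `PeftModel.from_pretrained`.

```python
# Hedged sketch: assumes a directly loadable GPTQ checkpoint at the repo id
# named in the card; the exact loading path is not specified by the commit.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "4darsh-Dev/Meta-Llama-3-8B-autogptq-4bit"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# transformers reads the GPTQ quantization config from the repo and loads
# the 4-bit weights onto the available GPU(s).
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain 4-bit quantization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```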