adding model card
README.md CHANGED
```diff
@@ -4,7 +4,6 @@ tags:
 - quantized
 - 4-bit
 - AWQ
-- DPO
 - transformers
 - pytorch
 - mistral
@@ -17,11 +16,12 @@ tags:
 - text-generation-inference
 - finetune
 - chatml
+- DPO
 model-index:
 - name: Ignis-7B-DPO-Laser
   results: []
 license: apache-2.0
-base_model: mistralai/Mistral-7B-
+base_model: mistralai/Mistral-7B-Instruct-v0.2
 language:
 - en
 quantized_by: Suparious
@@ -41,3 +41,84 @@ prompt_template: '<|im_start|>system
 
 '
 ---
```
# Ignis 7B DPO AWQ

- Model creator: [NeuralNovel](https://huggingface.co/NeuralNovel)
- Original model: [Ignis-7B-DPO-Laser](https://huggingface.co/NeuralNovel/Ignis-7B-DPO-Laser)

![image/jpeg](https://i.ibb.co/C8jZ6FW/OIG3.jpg)

- Community Organization: [ConvexAI](https://huggingface.co/ConvexAI)

## How to use

### Install the necessary packages

```bash
pip install --upgrade autoawq autoawq-kernels
```

### Example Python code

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer

model_path = "solidrust/Ignis-7B-DPO-Laser-AWQ"
system_message = "You are Ignis, incarnated as a powerful AI."

# Load the quantized model (fused layers speed up inference) and its tokenizer
model = AutoAWQForCausalLM.from_quantized(model_path,
                                          fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path,
                                          trust_remote_code=True)
streamer = TextStreamer(tokenizer,
                        skip_prompt=True,
                        skip_special_tokens=True)

# Build the ChatML prompt and convert it to tokens
prompt_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""

prompt = "You're standing on the surface of the Earth. "\
         "You walk one mile south, one mile west and one mile north. "\
         "You end up exactly where you started. Where are you?"

tokens = tokenizer(prompt_template.format(system_message=system_message, prompt=prompt),
                   return_tensors='pt').input_ids.cuda()

# Generate output, streaming decoded tokens to stdout as they are produced
generation_output = model.generate(tokens,
                                   streamer=streamer,
                                   max_new_tokens=512)
```

### About AWQ

AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference than GPTQ, with quality equivalent to or better than the most commonly used GPTQ settings.
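For reference, producing an AWQ checkpoint like this one follows a recipe along the lines of the sketch below. The 4-bit GEMM calibration settings shown are common AutoAWQ defaults and an assumption; the exact settings used for this repo are not documented in this card.

```python
# A minimal AutoAWQ quantization sketch. The quant_config values below are
# common defaults and an assumption; the exact recipe used for this repo
# is not documented here.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

base_path = "NeuralNovel/Ignis-7B-DPO-Laser"  # the unquantized original model
quant_path = "Ignis-7B-DPO-Laser-AWQ"
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(base_path)
tokenizer = AutoTokenizer.from_pretrained(base_path, trust_remote_code=True)

# Calibrate and quantize the weights, then save the 4-bit checkpoint
model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```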
AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users should use GGUF models instead.

It is supported by:

- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later supports all model types (see the sketch after this list)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) - version 4.35.0 or later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
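As a concrete example of the vLLM route, a minimal offline-inference sketch might look like this; the prompt and sampling parameters are illustrative assumptions, not recommended settings.

```python
# A minimal vLLM sketch for this AWQ checkpoint. Sampling values are
# illustrative assumptions only.
from vllm import LLM, SamplingParams

llm = LLM(model="solidrust/Ignis-7B-DPO-Laser-AWQ", quantization="awq")
sampling = SamplingParams(temperature=0.7, max_tokens=256)

# vLLM takes raw prompt strings, so apply the ChatML template by hand
prompt = ("<|im_start|>system\nYou are Ignis, incarnated as a powerful AI.<|im_end|>\n"
          "<|im_start|>user\nHello!<|im_end|>\n<|im_start|>assistant\n")
outputs = llm.generate([prompt], sampling)
print(outputs[0].outputs[0].text)
```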
## Prompt template: ChatML

```plaintext
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```