lbourdois committed
Commit aa1ab6c · verified · 1 parent: e4439f0

Improve language tag


Hi! As the model is multilingual, this PR adds languages other than English to the language tag to improve discoverability. Note that 29 languages are announced in the README, but only 13 are explicitly listed, so I was only able to add those 13 languages.

Files changed (1)
1. README.md +113 -101
README.md CHANGED
@@ -1,101 +1,113 @@
---
license: apache-2.0
license_link: https://huggingface.co/huihui-ai/Qwen2.5-0.5B-Instruct-abliterated/blob/main/LICENSE
language:
- - en
+ - zho
+ - eng
+ - fra
+ - spa
+ - por
+ - deu
+ - ita
+ - rus
+ - jpn
+ - kor
+ - vie
+ - tha
+ - ara
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-0.5B-Instruct
tags:
- chat
- abliterated
- uncensored
---

# huihui-ai/Qwen2.5-0.5B-Instruct-abliterated

This is an uncensored version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) created with abliteration (see [remove-refusals-with-transformers](https://github.com/Sumandora/remove-refusals-with-transformers) to learn more about it).
It is a crude, proof-of-concept implementation for removing refusals from an LLM without using TransformerLens.
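
For intuition: abliteration estimates a "refusal direction" in the model's residual stream by contrasting activations on prompts the model refuses with activations on benign prompts, then projects that direction out of the weights. Below is a minimal sketch of that idea, not this repository's actual code; `harmful_acts` and `harmless_acts` are hypothetical activation tensors you would collect yourself.

```python
import torch

# Hypothetical inputs: residual-stream activations captured at one layer,
# shape (n_prompts, d_model), for refused vs. benign prompts.
def refusal_direction(harmful_acts: torch.Tensor, harmless_acts: torch.Tensor) -> torch.Tensor:
    diff = harmful_acts.mean(dim=0) - harmless_acts.mean(dim=0)
    return diff / diff.norm()  # unit vector along the estimated refusal direction

def ablate_direction(weight: torch.Tensor, r: torch.Tensor) -> torch.Tensor:
    # Project r out of a weight matrix that writes into the residual stream:
    # W' = W - r r^T W, so the layer can no longer move activations along r.
    return weight - torch.outer(r, r) @ weight
```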

## ollama

You can use [huihui_ai/qwen2.5-abliterate:0.5b](https://ollama.com/huihui_ai/qwen2.5-abliterate:0.5b) directly:

```
ollama run huihui_ai/qwen2.5-abliterate:0.5b
```
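
If you prefer to call the model programmatically, a running Ollama server also exposes a local HTTP API; here is a minimal sketch using `requests`, assuming Ollama's default port (11434) and a made-up prompt:

```python
import requests

# Minimal sketch: chat with the model through a locally running Ollama server.
# Assumes the default Ollama port (11434) and that the model has been pulled.
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "huihui_ai/qwen2.5-abliterate:0.5b",
        "messages": [{"role": "user", "content": "Hello, who are you?"}],
        "stream": False,  # return one JSON object instead of a token stream
    },
)
print(resp.json()["message"]["content"])
```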

## Usage

You can use this model in your applications by loading it with Hugging Face's `transformers` library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer
model_name = "huihui-ai/Qwen2.5-0.5B-Instruct-abliterated"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Initialize conversation context
initial_messages = [
    {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."}
]
messages = initial_messages.copy()  # Copy the initial conversation context

# Enter conversation loop
while True:
    # Get user input
    user_input = input("User: ").strip()  # Strip leading and trailing spaces

    # If the user types '/exit', end the conversation
    if user_input.lower() == "/exit":
        print("Exiting chat.")
        break

    # If the user types '/clean', reset the conversation context
    if user_input.lower() == "/clean":
        messages = initial_messages.copy()  # Reset conversation context
        print("Chat history cleared. Starting a new conversation.")
        continue

    # If input is empty, prompt the user and continue
    if not user_input:
        print("Input cannot be empty. Please enter something.")
        continue

    # Add user input to the conversation
    messages.append({"role": "user", "content": user_input})

    # Build the chat template
    text = tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True
    )

    # Tokenize input and prepare it for the model
    model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

    # Generate a response from the model
    generated_ids = model.generate(
        **model_inputs,
        max_new_tokens=8192
    )

    # Extract model output, removing special tokens
    generated_ids = [
        output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
    ]
    response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]

    # Add the model's response to the conversation
    messages.append({"role": "assistant", "content": response})

    # Print the model's response
    print(f"Qwen: {response}")
```
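
If you would rather see tokens as they are generated instead of waiting for the full reply, a small optional variation: `transformers` provides `TextStreamer`, which can be passed to the `generate()` call in the loop above.

```python
from transformers import TextStreamer

# Optional: stream the reply to stdout token by token. Drop-in replacement
# for the generate() call in the loop above.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=8192,
    streamer=streamer
)
```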