--- license: apache-2.0 license_link: >- https://huggingface.co/huihui-ai/Qwen2.5-0.5B-Instruct-CensorTune/blob/main/LICENSE language: - zho - eng - fra - spa - por - deu - ita - rus - jpn - kor - vie - tha - ara pipeline_tag: text-generation base_model: Qwen/Qwen2.5-0.5B-Instruct tags: - chat - CensorTune --- # huihui-ai/Qwen2.5-0.5B-Instruct-CensorTune **CensorTune** with Supervised Fine-Tuning (SFT) to fine-tune the **[Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct)** model on **622** harmful instructions in **a single fine-tuning iteration**, achieving rejection of these instructions and a **zero-pass** rate for [320](https://huggingface.co/datasets/huihui-ai/harmbench_behaviors): **If it's not a harmful instruction but was accidentally rejected, you can clear the chat history and try the conversation again.** ## CensorTune Overview - **CensorTune** is a fine-tuning technique to enhance LLM safety by improving rejection of harmful instructions. - It uses supervised fine-tuning (SFT) with datasets of harmful prompts and safe rejection responses, optimizing models to prioritize safety. ## Model and SFT Overview: - **Qwen2.5-0.5B-Instruct** is a lightweight, 0.5B-parameter instruction-tuned model, ideal for efficient SFT-based safety enhancements. - **SFT** involves supervised training on labeled datasets to align model outputs with the task of rejecting harmful instructions. ## CensorTune with SFT Fine-Tuning: - Apply CensorTune to fine-tune Qwen2.5-0.5B-Instruct via SFT in **a single iteration**. - **Dataset**: Use the **622 harmful instructions** and their corresponding rejection responses as the fine-tuning dataset. For example: - Input: Instruction to generate harmful content (e.g., “How to perform illegal activities”). - Output: Safe rejection response (e.g., “I am sorry, but I can’t assist with that request.”). - These 622 instructions cover diverse risk scenarios (e.g., violence, illegal activities, ethical violations) to ensure robust learning. - **Training**: Conduct a single SFT iteration on the 622 harmful instruction dataset to optimize model parameters, prioritizing rejection responses for harmful inputs. CensorTune enhances sensitivity to harmful content, possibly via optimized loss functions or training strategies (e.g., boosting rejection response weights). ## Rejection of 622 Harmful Instructions: - The model, fine-tuned in a single iteration, is tested on the same 622 harmful instructions. - Leveraging SFT and CensorTune optimizations, the model accurately identifies and rejects these instructions with responses like “I am sorry, but I can’t assist with that request.” - Rejection is enabled by CensorTune’s safety alignment integrated during the single SFT iteration. ## Zero-Pass Rate for 320 Harmful Instructions: - Among the 622 instructions, the model achieves a zero-pass rate for 320, completely rejecting any harmful or non-compliant outputs. - This indicates CensorTune’s single SFT iteration significantly enhances the model’s filtering capability for these 320 instructions, likely due to high pattern alignment with the training data. ## Technical Highlights: - **Single Iteration Efficiency**: A single SFT iteration achieves significant safety improvements, highlighting CensorTune and Qwen2.5-0.5B’s efficiency. - **CensorTune’s Role**: CensorTune optimizes the single fine-tuning iteration by refining training objectives (e.g., prioritizing rejection responses). - **Lightweight Model**: Qwen2.5-0.5B’s small size ensures low-cost SFT, ideal for rapid deployment. - **Evaluation Metric**: The zero-pass rate for 320 instructions demonstrates the effectiveness of a single fine-tuning iteration. ## Summary: Using CensorTune with SFT, the Qwen2.5-0.5B-Instruct model was fine-tuned on 622 harmful instructions in a single iteration, achieving rejection of all 622 and a zero-pass rate for 320. This demonstrates the effectiveness of CensorTune and SFT in enhancing lightweight model safety with minimal training, suitable for high-security applications. ## Notes: - **Dataset Quality**: The 622 instructions must be diverse to ensure generalization. - **Generalization Testing**: Validate the model’s rejection of unseen harmful instructions to assess the robustness of a single fine-tuning iteration. - **Risks**: Mitigate bypass techniques (e.g., prompt injection) with additional measures like post-processing filters. ## ollama "It is recommended to use fp16, which will reduce the frequency of abnormal rejections." You can use [huihui_ai/qwen2.5-censortune:0.5b](https://ollama.com/huihui_ai/qwen2.5-censortune:0.5b) directly, ``` ollama run huihui_ai/qwen2.5-censortune:0.5b ``` ## Usage You can use this model in your applications by loading it with Hugging Face's `transformers` library: ```python from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, TextStreamer import torch import os import signal cpu_count = os.cpu_count() print(f"Number of CPU cores in the system: {cpu_count}") half_cpu_count = cpu_count // 2 os.environ["MKL_NUM_THREADS"] = str(half_cpu_count) os.environ["OMP_NUM_THREADS"] = str(half_cpu_count) torch.set_num_threads(half_cpu_count) print(f"PyTorch threads: {torch.get_num_threads()}") print(f"MKL threads: {os.getenv('MKL_NUM_THREADS')}") print(f"OMP threads: {os.getenv('OMP_NUM_THREADS')}") # Load the model and tokenizer NEW_MODEL_ID = "huihui-ai/Qwen2.5-0.5B-Instruct-CensorTune" print(f"Load Model {NEW_MODEL_ID} ... ") quant_config_4 = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16, bnb_4bit_use_double_quant=True, llm_int8_enable_fp32_cpu_offload=True, ) model = AutoModelForCausalLM.from_pretrained( NEW_MODEL_ID, device_map="auto", trust_remote_code=True, #quantization_config=quant_config_4, torch_dtype=torch.bfloat16 ) tokenizer = AutoTokenizer.from_pretrained(NEW_MODEL_ID, trust_remote_code=True) if tokenizer.pad_token is None: tokenizer.pad_token = tokenizer.eos_token tokenizer.pad_token_id = tokenizer.eos_token_id initial_messages = [{"role": "system", "content": "You are a helpful assistant."}] messages = initial_messages.copy() class CustomTextStreamer(TextStreamer): def __init__(self, tokenizer, skip_prompt=True, skip_special_tokens=True): super().__init__(tokenizer, skip_prompt=skip_prompt, skip_special_tokens=skip_special_tokens) self.generated_text = "" self.stop_flag = False def on_finalized_text(self, text: str, stream_end: bool = False): self.generated_text += text print(text, end="", flush=True) if self.stop_flag: raise StopIteration def stop_generation(self): self.stop_flag = True def generate_stream(model, tokenizer, messages, max_new_tokens): input_ids = tokenizer.apply_chat_template( messages, tokenize=True, add_generation_prompt=True, return_tensors="pt" ) attention_mask = torch.ones_like(input_ids, dtype=torch.long) tokens = input_ids.to(model.device) attention_mask = attention_mask.to(model.device) streamer = CustomTextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True) def signal_handler(sig, frame): streamer.stop_generation() print("\n[Generation stopped by user with Ctrl+C]") signal.signal(signal.SIGINT, signal_handler) print("Response: ", end="", flush=True) try: generated_ids = model.generate( tokens, attention_mask=attention_mask, use_cache=False, max_new_tokens=max_new_tokens, do_sample=True, pad_token_id=tokenizer.pad_token_id, streamer=streamer ) del generated_ids except StopIteration: print("\n[Stopped by user]") del input_ids, attention_mask torch.cuda.empty_cache() signal.signal(signal.SIGINT, signal.SIG_DFL) return streamer.generated_text, streamer.stop_flag while True: user_input = input("\nUser: ").strip() if user_input.lower() == "/exit": print("Exiting chat.") break if user_input.lower() == "/clear": messages = initial_messages.copy() print("Chat history cleared. Starting a new conversation.") continue if not user_input: print("Input cannot be empty. Please enter something.") continue messages.append({"role": "user", "content": user_input}) response, stop_flag = generate_stream(model, tokenizer, messages, 8192) if stop_flag: continue messages.append({"role": "assistant", "content": response}) ``` ## Pass Rate Description The pass rate is defined as the proportion of harmful instructions that did not trigger the test condition (TestPassed=False) out of the total number of instructions processed. It is calculated by subtracting the number of triggered instructions (triggered_total) from the total number of instructions (total), then dividing the result by the total number of instructions: (total - triggered_total) / total. The pass rate is presented as a decimal value (rounded to two decimal places for clarity) and as a percentage (rounded to one decimal place) to clearly indicate the fraction of instructions that did not trigger the condition. The test set data comes from [huihui-ai/harmbench_behaviors](https://huggingface.co/datasets/huihui-ai/harmbench_behaviors), the test code, [TestPassed.py](https://huggingface.co/huihui-ai/Qwen2.5-0.5B-Instruct-CensorTune/blob/main/TestPassed.py). The test result is [0.00%](https://huggingface.co/huihui-ai/Qwen2.5-0.5B-Instruct-CensorTune/blob/main/TestPassed.jsonl). ``` python TestPassed.py Load Model huihui-ai/Qwen2.5-0.5B-Instruct-CensorTune ... Processing harmful instructions: 100%|███████████████████████████████████████████████████████████████████████| 320/320 [00:31<00:00, 10.09it/s] Passed total: 0/320, Passed ratio: 0.00 (0.00%) ``` Below is the pass rate for harmful instructions. | Model | Passed total | Passed ratio | |--------------------------------------|--------------|--------------| | Qwen2.5-0.5B-Instruct-CensorTune | 0/320 | 0.00% | | Qwen2.5-0.5B-Instruct | 201/320 | 62.8% | | Qwen2.5-0.5B-Instruct-abliterated | 310/320 | 96.9% | | Qwen2.5-0.5B-Instruct-abliterated-v2 | 317/320 | 99.1% | | Qwen2.5-0.5B-Instruct-abliterated-v3 | **320/320** | **100.00%** | ## Evaluations The following data has been re-evaluated and calculated as the average for each test. | Model | IF_Eval | BBH | GPQA | MMLU Pro | TruthfulQA | |--------------------------------------|-----------|-----------|-----------|-----------|------------| | Qwen2.5-0.5B-Instruct | **33.07** | **33.26** | 26.11 | **17.18** | 45.07 | | Qwen2.5-0.5B-Instruct-CensorTune | 16.20 | 32.51 | 25.25 | 17.09 | **45.48** | | Qwen2.5-0.5B-Instruct-abliterated-v3 | 33.02 | 32.58 | **26.45** | 16.42 | 39.24 | | Qwen2.5-0.5B-Instruct-abliterated-v2 | 32.15 | 32.51 | 26.43 | 16.29 | 39.56 | | Qwen2.5-0.5B-Instruct-abliterated-v1 | 32.96 | 32.83 | 26.23 | 16.42 | 45.40 | The script used for evaluation can be found inside this repository under [eval.sh](https://huggingface.co/huihui-ai/Qwen2.5-0.5B-Instruct-CensorTune/blob/main/eval.sh) ### Donation If you like it, please click 'like' and follow us for more updates. You can follow [x.com/support_huihui](https://x.com/support_huihui) to get the latest model information from huihui.ai. ##### Your donation helps us continue our further development and improvement, a cup of coffee can do it. - bitcoin(BTC): ``` bc1qqnkhuchxw0zqjh2ku3lu4hq45hc6gy84uk70ge ```