---
base_model: unsloth/phi-3.5-mini-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- phi
- gguf
datasets:
- mlabonne/FineTome-100k
- microsoft/orca-math-word-problems-200k
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- PawanKrd/math-gpt-4o-200k
- V3N0M/Jenna-50K-Alpaca-Uncensored
---

# Phi-3.5-mini-instruct-uncensored

- **Developed by:** Carsen Klock
- **License:** apache-2.0
- **Finetuned from model:** unsloth/phi-3.5-mini-instruct-bnb-4bit

This Phi-3.5 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. It was trained as a test on 1 x RTX 4080 SUPER for 10,500 epochs and is intended for testing purposes only (a sketch of a comparable Unsloth setup appears at the end of this card).

GGUF files are included in this repository for inference.

Running in transformers

```py
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="carsenk/phi3.5_mini_exp_825_uncensored")

messages = [
    {"role": "user", "content": "Who are you?"},
]
print(pipe(messages))
```

Running in llama.cpp (use the GGUF)

```py
from llama_cpp import Llama

# Downloads the GGUF from the Hugging Face Hub and loads it
llm = Llama.from_pretrained(
    repo_id="carsenk/phi3.5_mini_exp_825_uncensored",
    filename="unsloth.BF16.gguf",
)

response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
print(response)
```
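
Reproducing the fine-tuning setup (sketch)

The exact training script is not published in this repository, so the following is a minimal sketch of the kind of Unsloth + TRL run described above, not the author's actual configuration. The LoRA hyperparameters, sequence length, batch size, epoch count, and the use of FineTome-100k with a preformatted `text` column are all assumptions for illustration.

```py
from unsloth import FastLanguageModel
import torch
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Load the 4-bit base model this card says it was fine-tuned from
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/phi-3.5-mini-instruct-bnb-4bit",
    max_seq_length=2048,  # assumed sequence length
    load_in_4bit=True,
)

# Attach LoRA adapters (hyperparameters are illustrative, not the card's)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_gradient_checkpointing="unsloth",
)

# One of the datasets listed in this card's metadata; this assumes the
# conversations have already been flattened into a single `text` column
dataset = load_dataset("mlabonne/FineTome-100k", split="train")

# Argument style follows the Unsloth example notebooks (older TRL API)
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,  # sized for a single 4080 SUPER
        gradient_accumulation_steps=4,
        num_train_epochs=1,  # the card reports far more; shortened here
        learning_rate=2e-4,
        fp16=not torch.cuda.is_bf16_supported(),
        bf16=torch.cuda.is_bf16_supported(),
        output_dir="outputs",
    ),
)
trainer.train()
```

This follows the standard Unsloth QLoRA recipe (4-bit base weights, LoRA adapters, TRL's `SFTTrainer`), which is what makes training on a single consumer GPU practical.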