---
license: apache-2.0
base_model: allura-org/Q3-8B-Kintsugi
library_name: mlx
tags:
- mergekit
- axolotl
- unsloth
- roleplay
- conversational
- mlx
datasets:
- PygmalionAI/PIPPA
- Alfitaria/nemotron-ultra-reasoning-synthkink
- PocketDoc/Dans-Prosemaxx-Gutenberg
- FreedomIntelligence/Medical-R1-Distill-Data
- cognitivecomputations/SystemChat-2.0
- allenai/tulu-3-sft-personas-instruction-following
- kalomaze/Opus_Instruct_25k
- simplescaling/s1K-claude-3-7-sonnet
- ai2-adapt-dev/flan_v2_converted
- grimulkan/theory-of-mind
- grimulkan/physical-reasoning
- nvidia/HelpSteer3
- nbeerbower/gutenberg2-dpo
- nbeerbower/gutenberg-moderne-dpo
- nbeerbower/Purpura-DPO
- antiven0m/physical-reasoning-dpo
- allenai/tulu-3-IF-augmented-on-policy-70b
- NobodyExistsOnTheInternet/system-message-DPO
pipeline_tag: text-generation
---

# soundTeam/Q3-8B-Kintsugi_mlx-4bpw

This model [soundTeam/Q3-8B-Kintsugi_mlx-4bpw](https://huggingface.co/soundTeam/Q3-8B-Kintsugi_mlx-4bpw) was converted to MLX format from [allura-org/Q3-8B-Kintsugi](https://huggingface.co/allura-org/Q3-8B-Kintsugi) using mlx-lm version **0.25.2**.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("soundTeam/Q3-8B-Kintsugi_mlx-4bpw")

prompt = "hello"

# Apply the model's chat template if one is bundled with the tokenizer.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
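For longer roleplay-style generations you will likely want to cap the output length and adjust sampling. A minimal sketch, assuming mlx-lm 0.25.x, where `generate` forwards `max_tokens` and a `sampler` built with `mlx_lm.sample_utils.make_sampler`; the temperature and top-p values below are illustrative, not tuned recommendations for this model:

```python
from mlx_lm import load, generate
from mlx_lm.sample_utils import make_sampler

model, tokenizer = load("soundTeam/Q3-8B-Kintsugi_mlx-4bpw")

messages = [{"role": "user", "content": "Write a short opening scene."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Illustrative sampling settings; tune to taste.
sampler = make_sampler(temp=0.8, top_p=0.95)

response = generate(
    model,
    tokenizer,
    prompt=prompt,
    max_tokens=512,
    sampler=sampler,
    verbose=True,
)
```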
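mlx-lm also installs a small command-line generator, which is handy for a quick smoke test of the converted weights (check `mlx_lm.generate --help` for the flags available in your installed version):

```bash
mlx_lm.generate --model soundTeam/Q3-8B-Kintsugi_mlx-4bpw --prompt "hello" --max-tokens 100
```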