---
library_name: transformers
license: apache-2.0
datasets:
- anthracite-org/kalo-opus-instruct-22k-no-refusal
- Nopm/Opus_WritingStruct
- Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
- Gryphe/Sonnet3.5-Charcard-Roleplay
- Gryphe/ChatGPT-4o-Writing-Prompts
- Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
- Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
- nothingiisreal/Reddit-Dirty-And-WritingPrompts
- allura-org/Celeste-1.x-data-mixture
- cognitivecomputations/dolphin-2.9.3
base_model: GoraPakora/QwenQwen2
tags:
- generated_from_trainer
- mlx
- mlx-my-repo
model-index:
- name: EVA-Qwen2.5-32B-SFFT-v0.1
  results: []
---

# Fmuaddib/QwenQwen2-mlx-8Bit

The model [Fmuaddib/QwenQwen2-mlx-8Bit](https://huggingface.co/Fmuaddib/QwenQwen2-mlx-8Bit) was converted to MLX format from [GoraPakora/QwenQwen2](https://huggingface.co/GoraPakora/QwenQwen2) using mlx-lm version **0.22.1**.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("Fmuaddib/QwenQwen2-mlx-8Bit")

prompt = "hello"

# Apply the model's chat template when the tokenizer provides one,
# so the prompt is wrapped in the format the model was trained on.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
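For quick one-off prompts, the model can also be tried from the shell without writing any Python. A minimal sketch, assuming the `mlx_lm.generate` console script installed by recent mlx-lm releases:

```bash
# Assumes the mlx_lm.generate console script bundled with recent mlx-lm releases.
mlx_lm.generate --model Fmuaddib/QwenQwen2-mlx-8Bit --prompt "hello" --max-tokens 256
```

Recent versions of the script apply the tokenizer's chat template automatically when one is available, mirroring the Python snippet above.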