---
license: apache-2.0
language:
  - en
pipeline_tag: text-generation
tags:
  - chat
  - llama-cpp
  - gguf-my-repo
base_model: Qwen/Qwen2-0.5B-Instruct
---

# Brianpuz/Qwen2-0.5B-Instruct-Q4_K_M-GGUF

Absolutely tremendous! This repo features GGUF quantized versions of [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct), made possible using the very powerful [llama.cpp](https://github.com/ggerganov/llama.cpp). Believe me, it's fast, it's smart, it's winning.

## Quantized Versions

Only the best quantization. You'll love it. If you want the file on disk before running anything, see the download sketch below.
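A minimal sketch for grabbing the GGUF file directly, assuming you have the `huggingface_hub` CLI installed (`pip install -U huggingface_hub`); the filename is the Q4_K_M file referenced in the command further down:

```bash
# Download the Q4_K_M GGUF into the current directory
huggingface-cli download Brianpuz/Qwen2-0.5B-Instruct-Q4_K_M-GGUF qwen2-0.5b-instruct-q4_k_m.gguf --local-dir .
```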

## Run with llama.cpp

Just plug it in, hit the command line, and boom — you're running world-class AI, folks:

```bash
llama-cli --hf-repo Brianpuz/Qwen2-0.5B-Instruct-Q4_K_M-GGUF --hf-file qwen2-0.5b-instruct-q4_k_m.gguf -p "AI First, but also..."
```
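Prefer an HTTP endpoint instead of the terminal? llama.cpp also ships `llama-server`, which accepts the same `--hf-repo` / `--hf-file` flags. A minimal sketch, assuming llama.cpp is installed via Homebrew; the `-c 2048` context size is just an illustrative choice, not a tuned setting:

```bash
# Install llama.cpp (macOS/Linux) if you don't already have it
brew install llama.cpp

# Serve the model with an OpenAI-compatible API on the default port (8080)
llama-server --hf-repo Brianpuz/Qwen2-0.5B-Instruct-Q4_K_M-GGUF --hf-file qwen2-0.5b-instruct-q4_k_m.gguf -c 2048
```

Once it's up, any OpenAI-compatible client can point at `http://localhost:8080/v1` and start chatting. Tremendous.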

This beautiful Hugging Face Space was brought to you by the amazing team at Antigma Labs. Great people. Big vision. Doing things that matter — and doing them right. Total winners.