---
license: apache-2.0
tags:
- llama.cpp
- gguf
- query-expansion
datasets:
- s-emanuilov/query-expansion
base_model:
- Qwen/Qwen2.5-3B-GGUF
---

# Query Expansion GGUF - based on Qwen2.5-3B

GGUF quantized version of Qwen2.5-3B, fine-tuned for the query expansion task. Part of a collection of query expansion models available in different architectures and sizes.

## Overview

**Task:** Search query expansion
**Base model:** [Qwen2.5-3B](https://huggingface.co/Qwen/Qwen2.5-3B)
**Training data:** [Query Expansion Dataset](https://huggingface.co/datasets/s-emanuilov/query-expansion)

## Quantized Versions

The model is available in multiple quantization formats:

- F16 (original size)
- Q8_0 (~8-bit quantization)
- Q5_K_M (~5-bit quantization)
- Q4_K_M (~4-bit quantization)
- Q3_K_M (~3-bit quantization)

## Related Models

### LoRA Adapters

- [Qwen2.5-3B](https://huggingface.co/s-emanuilov/query-expansion-Qwen2.5-3B)
- [Qwen2.5-7B](https://huggingface.co/s-emanuilov/query-expansion-Qwen2.5-7B)
- [Llama-3.2-3B](https://huggingface.co/s-emanuilov/query-expansion-Llama-3.2-3B)

### GGUF Variants

- [Qwen2.5-7B-GGUF](https://huggingface.co/s-emanuilov/query-expansion-Qwen2.5-7B-GGUF)
- [Llama-3.2-3B-GGUF](https://huggingface.co/s-emanuilov/query-expansion-Llama-3.2-3B-GGUF)

## Details

This model is designed to enhance search and retrieval systems by generating semantically related query expansions. It can be useful for:

- Advanced RAG systems
- Search enhancement
- Query preprocessing
- Low-latency query expansion

## Example

**Input:** "apple stock"

**Expansions:**

- "apple market"
- "apple news"
- "apple stock price"
- "apple stock forecast"

## Citation

If you find my work helpful, feel free to give me a citation.
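## Usage

A minimal sketch of running the GGUF model locally with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python). The exact prompt template used during fine-tuning is not documented in this card, so the ChatML-style format below (Qwen2.5's generic chat template), the system message, and the GGUF filename are assumptions; the small parsing helper simply splits a newline-separated response into individual expansions.

```python
# Sketch of query expansion with a GGUF model via llama-cpp-python.
# NOTE: the prompt template and filename below are assumptions; the model
# card does not document the exact training prompt.

def format_prompt(query: str) -> str:
    """Wrap a search query in Qwen2.5's ChatML chat template (assumed)."""
    return (
        "<|im_start|>system\n"
        "You expand search queries into related queries.<|im_end|>\n"
        "<|im_start|>user\n"
        f"{query}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

def parse_expansions(text: str) -> list[str]:
    """Split a newline-separated model response into a list of expansions."""
    return [line.strip("- ").strip() for line in text.splitlines() if line.strip()]

# Inference would look roughly like this (requires the downloaded GGUF file):
#   from llama_cpp import Llama
#   llm = Llama(model_path="query-expansion-Qwen2.5-3B.Q4_K_M.gguf")
#   out = llm(format_prompt("apple stock"), max_tokens=64, stop=["<|im_end|>"])
#   print(parse_expansions(out["choices"][0]["text"]))

sample = "- apple market\n- apple news\n- apple stock price"
print(parse_expansions(sample))  # → ['apple market', 'apple news', 'apple stock price']
```

The same prompt can be passed to the `llama-cli` binary from llama.cpp directly; only the wrapper API differs.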