---
base_model: MaziyarPanahi/calme-3.3-instruct-3b
datasets:
- MaziyarPanahi/french_instruct_sharegpt
- arcee-ai/EvolKit-20k
language:
- fr
- en
library_name: transformers
license: other
license_name: qwen-research
license_link: https://huggingface.co/Qwen/Qwen2.5-3B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- chat
- qwen
- qwen2.5
- finetune
- french
- english
- llama-cpp
- matrixportal
inference: false
model_creator: MaziyarPanahi
quantized_by: MaziyarPanahi
model-index:
- name: calme-3.3-instruct-3b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 64.23
name: strict accuracy
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MaziyarPanahi/calme-3.3-instruct-3b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 25.68
name: normalized accuracy
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MaziyarPanahi/calme-3.3-instruct-3b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 0
name: exact match
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MaziyarPanahi/calme-3.3-instruct-3b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 4.36
name: acc_norm
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MaziyarPanahi/calme-3.3-instruct-3b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 9.4
name: acc_norm
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MaziyarPanahi/calme-3.3-instruct-3b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 25.62
name: accuracy
source:
url: >-
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=MaziyarPanahi/calme-3.3-instruct-3b
name: Open LLM Leaderboard
---

# ysn-rfd/calme-3.3-instruct-3b-GGUF

This model was converted to GGUF format from [MaziyarPanahi/calme-3.3-instruct-3b](https://huggingface.co/MaziyarPanahi/calme-3.3-instruct-3b) using llama.cpp via ggml.ai's all-gguf-same-where space.
Refer to the [original model card](https://huggingface.co/MaziyarPanahi/calme-3.3-instruct-3b) for more details on the model.
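As a quick-start sketch, a single quantized file can be fetched with `huggingface-cli` and run with llama.cpp's CLI. The exact GGUF filename below is an assumption; check the repository's file list for the real names:

```shell
# Download one quantized file from this repo
# (filename is an assumed example; verify it in the repo's "Files" tab).
huggingface-cli download ysn-rfd/calme-3.3-instruct-3b-GGUF \
  calme-3.3-instruct-3b.Q4_K_M.gguf --local-dir .

# Start an interactive chat with llama.cpp (requires a local llama.cpp build).
llama-cli -m calme-3.3-instruct-3b.Q4_K_M.gguf -cnv \
  -p "You are a helpful bilingual (French/English) assistant."
```

The same file works unchanged in any of the GGUF-compatible applications listed below.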
## Quantized Models Download List

### Recommended Quantizations

- **General CPU use:** `Q4_K_M` (best balance of speed and quality)
- **ARM devices:** `Q4_0` (optimized for ARM CPUs)
- **Maximum quality:** `Q8_0` (near-original quality)
### Full Quantization Options

| Download | Type | Notes |
|---|---|---|
| Download | Q2_K | Basic quantization |
| Download | Q3_K_S | Small size |
| Download | Q3_K_M | Balanced quality |
| Download | Q3_K_L | Better quality |
| Download | Q4_0 | Fast on ARM |
| Download | Q4_K_S | Fast, recommended |
| Download | Q4_K_M | Best balance |
| Download | Q5_0 | Good quality |
| Download | Q5_K_S | Balanced |
| Download | Q5_K_M | High quality |
| Download | Q6_K | Very good quality |
| Download | Q8_0 | Fast, best quality |
| Download | F16 | Maximum accuracy |
**Tip:** Use `F16` for maximum precision when quality is critical.
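If a quantization you need is missing, llama.cpp's `llama-quantize` tool can re-quantize the F16 file locally. The filenames below are assumed examples:

```shell
# Re-quantize the full-precision GGUF down to Q4_K_M locally.
# llama-quantize ships with llama.cpp builds; filenames are assumed examples.
llama-quantize calme-3.3-instruct-3b.F16.gguf \
  calme-3.3-instruct-3b.Q4_K_M.gguf Q4_K_M
```

Quantizing from F16 (rather than re-quantizing an already-quantized file) preserves the most quality.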
## Applications and Tools for Locally Quantized LLMs

### Desktop Applications

| Application | Description | Download Link |
|---|---|---|
| Llama.cpp | A fast and efficient inference engine for GGUF models. | GitHub Repository |
| Ollama | A streamlined solution for running LLMs locally. | Website |
| AnythingLLM | An AI-powered knowledge management tool. | GitHub Repository |
| Open WebUI | A user-friendly web interface for running local LLMs. | GitHub Repository |
| GPT4All | A user-friendly desktop application supporting various LLMs, compatible with GGUF models. | GitHub Repository |
| LM Studio | A desktop application designed to run and manage local LLMs, supporting GGUF format. | Website |
| GPT4All Chat | A chat application compatible with GGUF models for local, offline interactions. | GitHub Repository |
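For instance, any of the GGUF files can be loaded into Ollama through a minimal Modelfile; the local GGUF filename and the model tag below are assumptions:

```shell
# Point a Modelfile at the downloaded GGUF (assumed filename).
cat > Modelfile <<'EOF'
FROM ./calme-3.3-instruct-3b.Q4_K_M.gguf
EOF

# Register the model with Ollama under a local tag, then chat with it.
ollama create calme-3.3-3b -f Modelfile
ollama run calme-3.3-3b "Bonjour, présente-toi brièvement."
```

The `FROM` directive accepting a local GGUF path is standard Ollama Modelfile syntax; chat-template and parameter overrides can be added to the same file if needed.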
### Mobile Applications

| Application | Description | Download Link |
|---|---|---|
| ChatterUI | A simple and lightweight LLM app for mobile devices. | GitHub Repository |
| Maid | Mobile Artificial Intelligence Distribution for running AI models on mobile devices. | GitHub Repository |
| PocketPal AI | A mobile AI assistant powered by local models. | GitHub Repository |
| Layla | A flexible platform for running various AI models on mobile devices. | Website |
### Image Generation Applications

| Application | Description | Download Link |
|---|---|---|
| Stable Diffusion | An open-source AI model for generating images from text. | GitHub Repository |
| Stable Diffusion WebUI | A web application providing access to Stable Diffusion models via a browser interface. | GitHub Repository |
| Local Dream | Android Stable Diffusion with Snapdragon NPU acceleration. Also supports CPU inference. | GitHub Repository |
| Stable-Diffusion-Android (SDAI) | An open-source AI art application for Android devices, enabling digital art creation. | GitHub Repository |