Falcon3-3B-Instruct-GGUF

Original Model

tiiuae/Falcon3-3B-Instruct (https://huggingface.co/tiiuae/Falcon3-3B-Instruct)

Run with LlamaEdge

  • LlamaEdge version: v0.16.0 and above (see the setup sketch after this list)

  • Prompt template

    • Prompt type: falcon3

    • Prompt string

      <|system|>
      You are a helpful friendly assistant Falcon3 from TII, try to follow instructions as much as possible.
      <|user|>
      {user_message}
      <|assistant|>
      
  • Context size: 32000

  • Run as LlamaEdge service (a sample request is shown after this list)

    wasmedge --dir .:. --nn-preload default:GGML:AUTO:Falcon3-3B-Instruct-Q5_K_M.gguf \
      llama-api-server.wasm \
      --model-name Falcon3-3B-Instruct \
      --prompt-template falcon3 \
      --ctx-size 32000
    
  • Run as LlamaEdge command app

    wasmedge --dir .:. --nn-preload default:GGML:AUTO:Falcon3-3B-Instruct-Q5_K_M.gguf \
      llama-chat.wasm \
      --prompt-template falcon3 \
      --ctx-size 32000
    

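Before running either command, you need the WasmEdge runtime with the GGML plugin and the two LlamaEdge wasm apps referenced above. A minimal setup sketch, following the install commands documented by the WasmEdge and LlamaEdge projects (the URLs below are assumed current; check the upstream docs if they have moved):

    # Install WasmEdge with the wasi_nn-ggml plugin (official installer script)
    curl -sSf https://raw.githubusercontent.com/WasmEdge/WasmEdge/master/utils/install.sh | bash -s -- --plugin wasi_nn-ggml

    # Fetch the LlamaEdge apps used in the commands above
    curl -LO https://github.com/LlamaEdge/LlamaEdge/releases/latest/download/llama-api-server.wasm
    curl -LO https://github.com/LlamaEdge/LlamaEdge/releases/latest/download/llama-chat.wasm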
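Once the API server is running, it exposes an OpenAI-compatible chat completions endpoint. A sample request, assuming the server's default address of localhost:8080 (the model name must match the --model-name flag used above):

    curl -X POST http://localhost:8080/v1/chat/completions \
      -H 'Content-Type: application/json' \
      -d '{
        "model": "Falcon3-3B-Instruct",
        "messages": [
          {"role": "system", "content": "You are a helpful assistant."},
          {"role": "user", "content": "What is the capital of France?"}
        ]
      }'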
Quantized GGUF Models

| Name | Quant method | Bits | Size | Use case |
| ---- | ------------ | ---- | ---- | -------- |
| Falcon3-3B-Instruct-Q2_K.gguf | Q2_K | 2 | 1.35 GB | smallest, significant quality loss - not recommended for most purposes |
| Falcon3-3B-Instruct-Q3_K_L.gguf | Q3_K_L | 3 | 1.78 GB | small, substantial quality loss |
| Falcon3-3B-Instruct-Q3_K_M.gguf | Q3_K_M | 3 | 1.67 GB | very small, high quality loss |
| Falcon3-3B-Instruct-Q3_K_S.gguf | Q3_K_S | 3 | 1.55 GB | very small, high quality loss |
| Falcon3-3B-Instruct-Q4_0.gguf | Q4_0 | 4 | 1.92 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| Falcon3-3B-Instruct-Q4_K_M.gguf | Q4_K_M | 4 | 2.01 GB | medium, balanced quality - recommended |
| Falcon3-3B-Instruct-Q4_K_S.gguf | Q4_K_S | 4 | 1.93 GB | small, greater quality loss |
| Falcon3-3B-Instruct-Q5_0.gguf | Q5_0 | 5 | 2.28 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| Falcon3-3B-Instruct-Q5_K_M.gguf | Q5_K_M | 5 | 2.32 GB | large, very low quality loss - recommended |
| Falcon3-3B-Instruct-Q5_K_S.gguf | Q5_K_S | 5 | 2.28 GB | large, low quality loss - recommended |
| Falcon3-3B-Instruct-Q6_K.gguf | Q6_K | 6 | 2.65 GB | very large, extremely low quality loss |
| Falcon3-3B-Instruct-Q8_0.gguf | Q8_0 | 8 | 3.43 GB | very large, extremely low quality loss - not recommended |
| Falcon3-3B-Instruct-f16.gguf | f16 | 16 | 6.46 GB | |

Quantized with llama.cpp b4381
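
To fetch one of the quantized files, a sketch using the huggingface-cli tool from the huggingface_hub package (the Q5_K_M file shown here matches the run commands above; substitute any name from the table):

    # Requires: pip install huggingface_hub
    huggingface-cli download second-state/Falcon3-3B-Instruct-GGUF \
      Falcon3-3B-Instruct-Q5_K_M.gguf --local-dir .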
