medgemma-27b-text-it-GGUF

Original Model

google/medgemma-27b-text-it

Run with LlamaEdge

  • LlamaEdge version: v0.18.5 and above

  • Prompt template

    • Prompt type: gemma-3

    • Prompt string

      <bos><start_of_turn>user
      {user_message}<end_of_turn>
      <start_of_turn>model
      {model_message}<end_of_turn>
      
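      For illustration, a single-turn prompt rendered with this template looks as follows (the question is an invented example); the model's reply is generated after the final <start_of_turn>model line:

      <bos><start_of_turn>user
      What are the common symptoms of iron-deficiency anemia?<end_of_turn>
      <start_of_turn>model
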
  • Context size: 128000

  • Run as LlamaEdge service

    • Chat

      wasmedge --dir .:. --nn-preload default:GGML:AUTO:medgemma-27b-text-it-Q5_K_M.gguf \
        llama-api-server.wasm \
        --prompt-template gemma-3 \
        --ctx-size 128000 \
        --model-name medgemma-27b
      
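      Once started, the server exposes an OpenAI-compatible API. A minimal request sketch, assuming the default listen address of 0.0.0.0:8080 (adjust the host and port if you start the server with a different socket address):

      curl -X POST http://localhost:8080/v1/chat/completions \
        -H 'Content-Type: application/json' \
        -d '{
          "model": "medgemma-27b",
          "messages": [
            {"role": "user", "content": "What are the common symptoms of iron-deficiency anemia?"}
          ]
        }'
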
    • Images

      Note that input images must be normalized to 896 x 896 resolution and are encoded to 256 tokens each.

      wasmedge --dir .:. --nn-preload default:GGML:AUTO:medgemma-27b-text-it-Q5_K_M.gguf \
        llama-api-server.wasm \
        --prompt-template gemma-3 \
        --llava-mmproj medgemma-27b-text-it-mmproj.gguf \
        --ctx-size 128000 \
        --model-name medgemma-27b
      
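      With the projector loaded, image input goes through the same chat endpoint. A sketch, assuming the server accepts OpenAI-style image_url content parts carrying a base64 data URL (<BASE64_IMAGE> is a placeholder for your encoded image; check the LlamaEdge docs for the exact schema your version supports):

      curl -X POST http://localhost:8080/v1/chat/completions \
        -H 'Content-Type: application/json' \
        -d '{
          "model": "medgemma-27b",
          "messages": [
            {
              "role": "user",
              "content": [
                {"type": "text", "text": "Describe the findings in this image."},
                {"type": "image_url", "image_url": {"url": "data:image/jpeg;base64,<BASE64_IMAGE>"}}
              ]
            }
          ]
        }'
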
  • Run as LlamaEdge command app

    wasmedge --dir .:. \
      --nn-preload default:GGML:AUTO:medgemma-27b-text-it-Q5_K_M.gguf \
      llama-chat.wasm \
      --prompt-template gemma-3 \
      --ctx-size 128000
    

Quantized GGUF Models

| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ---- |
| medgemma-27b-text-it-Q2_K.gguf | Q2_K | 2 | 10.5 GB | smallest, significant quality loss - not recommended for most purposes |
| medgemma-27b-text-it-Q3_K_L.gguf | Q3_K_L | 3 | 14.5 GB | small, substantial quality loss |
| medgemma-27b-text-it-Q3_K_M.gguf | Q3_K_M | 3 | 13.4 GB | very small, high quality loss |
| medgemma-27b-text-it-Q3_K_S.gguf | Q3_K_S | 3 | 12.2 GB | very small, high quality loss |
| medgemma-27b-text-it-Q4_0.gguf | Q4_0 | 4 | 15.6 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| medgemma-27b-text-it-Q4_K_M.gguf | Q4_K_M | 4 | 16.5 GB | medium, balanced quality - recommended |
| medgemma-27b-text-it-Q4_K_S.gguf | Q4_K_S | 4 | 15.7 GB | small, greater quality loss |
| medgemma-27b-text-it-Q5_0.gguf | Q5_0 | 5 | 18.8 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| medgemma-27b-text-it-Q5_K_M.gguf | Q5_K_M | 5 | 19.3 GB | large, very low quality loss - recommended |
| medgemma-27b-text-it-Q5_K_S.gguf | Q5_K_S | 5 | 18.8 GB | large, low quality loss - recommended |
| medgemma-27b-text-it-Q6_K.gguf | Q6_K | 6 | 22.2 GB | very large, extremely low quality loss |
| medgemma-27b-text-it-Q8_0.gguf | Q8_0 | 8 | 28.7 GB | very large, extremely low quality loss - not recommended |
| medgemma-27b-text-it-f16-00001-of-00002.gguf | f16 | 16 | 30.0 GB | |
| medgemma-27b-text-it-f16-00002-of-00002.gguf | f16 | 16 | 24.1 GB | |
| medgemma-27b-text-it-mmproj.gguf | f16 | 16 | 858 MB | |

Quantized with llama.cpp b5452
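
The commands above assume the chosen GGUF file (and, for image input, the mmproj file) is already in the working directory. A download sketch, assuming the standard Hugging Face resolve/main URL pattern for this repository; swap in any other file name from the table:

    curl -LO https://huggingface.co/second-state/medgemma-27b-text-it-GGUF/resolve/main/medgemma-27b-text-it-Q5_K_M.gguf
    curl -LO https://huggingface.co/second-state/medgemma-27b-text-it-GGUF/resolve/main/medgemma-27b-text-it-mmproj.gguf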
