
MaziyarPanahi / gemma-7b-it-GGUF

Tags: Text Generation · Transformers · GGUF · Safetensors · mistral · quantized · 2-bit · 3-bit · 4-bit precision · 5-bit · 6-bit · 8-bit precision · gemma · conversational · has_space · text-generation-inference

arXiv: 2312.11805, 2009.03300, 1905.07830, 1911.11641, 1904.09728, 1905.10044, 1907.10641, 1811.00937, 1809.02789, 1911.01547, 1705.03551, 2107.03374, 2108.07732, 2110.14168, 2304.06364, 2206.04615, 1804.06876, 2110.08193, 2009.11462, 2101.11718, 1804.09301, 2109.07958, 2203.09509
Model card · Files and versions · Community (2)

Files and versions
1 contributor · History: 7 commits
Latest commit: fc1d4d2 (verified) by MaziyarPanahi, "Upload folder using huggingface_hub", about 1 year ago
All files were uploaded via huggingface_hub about 1 year ago; the .gguf files are tracked with Git LFS (PR #1).

File                        Size       Storage
.gitattributes              2.17 kB
README.md                   11.2 kB
config.json                 31 Bytes
gemma-7b-it.Q2_K.gguf       3.48 GB    LFS
gemma-7b-it.Q3_K_L.gguf     4.71 GB    LFS
gemma-7b-it.Q3_K_M.gguf     4.37 GB    LFS
gemma-7b-it.Q3_K_S.gguf     3.98 GB    LFS
gemma-7b-it.Q4_K_M.gguf     5.33 GB    LFS
gemma-7b-it.Q4_K_S.gguf     5.05 GB    LFS
gemma-7b-it.Q5_K_M.gguf     6.14 GB    LFS
gemma-7b-it.Q5_K_S.gguf     5.98 GB    LFS
gemma-7b-it.Q6_K.gguf       7.01 GB    LFS
gemma-7b-it.Q8_0.gguf       9.08 GB    LFS
gemma-7b-it.fp16.gguf       17.1 GB    LFS
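The quantized variants trade file size (and memory footprint) for output quality: Q2_K is the smallest and lossiest, fp16 is the full-precision export. As an illustration only (this helper is not part of the repository), a small function that picks the largest quant fitting a given memory budget, using the sizes from the file listing above:

```python
# Sizes in GB, copied from the repository's file listing.
QUANT_SIZES_GB = {
    "gemma-7b-it.Q2_K.gguf": 3.48,
    "gemma-7b-it.Q3_K_S.gguf": 3.98,
    "gemma-7b-it.Q3_K_M.gguf": 4.37,
    "gemma-7b-it.Q3_K_L.gguf": 4.71,
    "gemma-7b-it.Q4_K_S.gguf": 5.05,
    "gemma-7b-it.Q4_K_M.gguf": 5.33,
    "gemma-7b-it.Q5_K_S.gguf": 5.98,
    "gemma-7b-it.Q5_K_M.gguf": 6.14,
    "gemma-7b-it.Q6_K.gguf": 7.01,
    "gemma-7b-it.Q8_0.gguf": 9.08,
    "gemma-7b-it.fp16.gguf": 17.1,
}


def pick_quant(budget_gb: float):
    """Return the largest quant file that fits in budget_gb, or None.

    Hypothetical helper for illustration: bigger quants generally mean
    better quality, so we pick the largest one that still fits.
    """
    fitting = [(size, name) for name, size in QUANT_SIZES_GB.items()
               if size <= budget_gb]
    if not fitting:
        return None
    return max(fitting)[0:2][1]


print(pick_quant(8.0))   # Q6_K (7.01 GB) is the largest under 8 GB
print(pick_quant(3.0))   # None: even Q2_K (3.48 GB) does not fit
```

Note the on-disk size is only a rough proxy for RAM/VRAM use at inference time; the KV cache and runtime overhead add to it, so leaving headroom below the budget is advisable.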