tritiumoxide/madlad400-7b-mt-bt-Q2_K-GGUF
Tags: Translation, Transformers, GGUF, JAX, allenai/MADLAD-400, 419 languages, t5, text2text-generation, text-generation-inference, llama-cpp, gguf-my-repo
License: apache-2.0
Files and versions (branch: main)
1 contributor
History: 5 commits
Latest commit: tritiumoxide, "should make this repo work with candle" (5171663, 10 months ago)
File                           Size        LFS   Last commit message                                        Last modified
.gitattributes                 1.64 kB     -     should make this repo work with candle                     10 months ago
README.md                      4.43 kB     -     Upload README.md with huggingface_hub                      10 months ago
config.json                    805 Bytes   -     should make this repo work with candle                     10 months ago
madlad400-7b-mt-bt-q2_k.gguf   3.21 GB     LFS   Upload madlad400-7b-mt-bt-q2_k.gguf with huggingface_hub   10 months ago
tokenizer.json                 16.6 MB     LFS   should make this repo work with candle                     10 months ago
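
The commit messages show these files were uploaded with huggingface_hub, so the same library can fetch them back for local use. Below is a minimal sketch, assuming the huggingface_hub Python package is installed; the repo id and filenames are taken from the listing above, and everything else (variable names, which files you actually need) is illustrative.

```python
# Minimal sketch: download the quantized weights and the files touched by the
# "should make this repo work with candle" commit, using huggingface_hub.
from huggingface_hub import hf_hub_download

REPO_ID = "tritiumoxide/madlad400-7b-mt-bt-Q2_K-GGUF"

# The 3.21 GB Q2_K GGUF file (stored via Git LFS).
gguf_path = hf_hub_download(repo_id=REPO_ID, filename="madlad400-7b-mt-bt-q2_k.gguf")

# Config and tokenizer, as listed in the repo.
config_path = hf_hub_download(repo_id=REPO_ID, filename="config.json")
tokenizer_path = hf_hub_download(repo_id=REPO_ID, filename="tokenizer.json")

print(gguf_path, config_path, tokenizer_path)
```

The returned paths point into the local Hugging Face cache and can be passed to whichever GGUF runtime you use; the llama-cpp tag and the candle-related commit message suggest llama.cpp or candle as targets, but loading the model in those runtimes is outside the scope of this file listing.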