EXL3 quants of microsoft/Phi-3-mini-4k-instruct, revision 65be4e00a56c16d036e9cbe96b0b35f8aa0f84b0
This corresponds to bartowski/Phi-3-mini-4k-instruct-old-GGUF.
| Bits per Weight | Model Size | Link |
|---|---|---|
| 2.0 bpw | 1.18 GB | oobabooga/Phi-3-mini-4k-instruct-old-exl3:2.0bpw |
| 2.25 bpw | 1.29 GB | oobabooga/Phi-3-mini-4k-instruct-old-exl3:2.25bpw |
| 2.5 bpw | 1.41 GB | oobabooga/Phi-3-mini-4k-instruct-old-exl3:2.5bpw |
| 3.0 bpw | 1.63 GB | oobabooga/Phi-3-mini-4k-instruct-old-exl3:3.0bpw |
| 3.5 bpw | 1.86 GB | oobabooga/Phi-3-mini-4k-instruct-old-exl3:3.5bpw |
| 4.0 bpw | 2.09 GB | oobabooga/Phi-3-mini-4k-instruct-old-exl3:4.0bpw |
| 5.0 bpw | 2.54 GB | oobabooga/Phi-3-mini-4k-instruct-old-exl3:5.0bpw |
| 6.0 bpw | 2.99 GB | oobabooga/Phi-3-mini-4k-instruct-old-exl3:6.0bpw |
| 8.0 bpw | 3.90 GB | oobabooga/Phi-3-mini-4k-instruct-old-exl3:8.0bpw |
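The `:<bpw>` suffix in the links above appears to name a revision (branch) within the repository. A minimal sketch of splitting such a spec into a repo ID and revision, with the actual download left as an illustrative comment (assuming `huggingface_hub` is installed and network access is available):

```python
def split_quant_spec(spec: str) -> tuple[str, str]:
    """Split a 'repo_id:revision' spec into its two parts."""
    repo_id, _, revision = spec.rpartition(":")
    return repo_id, revision

repo_id, revision = split_quant_spec(
    "oobabooga/Phi-3-mini-4k-instruct-old-exl3:2.5bpw"
)
# Illustrative download into the local Hugging Face cache (requires
# the huggingface_hub package and network access):
# from huggingface_hub import snapshot_download
# path = snapshot_download(repo_id=repo_id, revision=revision)
```

`rpartition(":")` splits on the last colon, so repo IDs containing an org prefix (`oobabooga/...`) are handled correctly.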