GGUF quants of nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1
Quantized with llama.cpp release b5436 (commit be0239693c1530a18496086331fc18d8a9adbad1).
The importance matrix was generated with calibration_datav3.txt.
All quants, including the K quants, were generated with this imatrix.
Quantized from BF16.
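
For reference, the pipeline above can be reproduced roughly as follows. This is a minimal sketch, assuming the llama.cpp b5436 binaries (llama-imatrix, llama-quantize) are on your PATH and that the BF16 GGUF has already been produced with convert_hf_to_gguf.py; the file names are illustrative, not the actual names in this repo.

```python
import subprocess

BF16_GGUF = "Llama-3.1-Nemotron-Nano-4B-v1.1-BF16.gguf"  # hypothetical file name
CALIB = "calibration_datav3.txt"
IMATRIX = "imatrix.dat"

# 1. Compute the importance matrix from the calibration data.
subprocess.run(
    ["llama-imatrix", "-m", BF16_GGUF, "-f", CALIB, "-o", IMATRIX],
    check=True,
)

# 2. Quantize from BF16, applying the imatrix to every quant type,
#    including the K quants. The list below is an illustrative subset
#    of the 2- to 8-bit types llama.cpp supports.
for qtype in ["Q2_K", "Q3_K_M", "Q4_K_M", "Q5_K_M", "Q6_K", "Q8_0"]:
    out = f"Llama-3.1-Nemotron-Nano-4B-v1.1-{qtype}.gguf"
    subprocess.run(
        ["llama-quantize", "--imatrix", IMATRIX, BF16_GGUF, out, qtype],
        check=True,
    )
```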
Quants are available at 2-, 3-, 4-, 5-, 6-, 8-, and 16-bit precision.
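
To try one of the quants straight from this repo, something like the following should work. It is a minimal sketch, assuming llama-cpp-python is installed and that the 4-bit file name matches the usual *Q4_K_M* pattern (the exact file names are not listed here).

```python
from llama_cpp import Llama

# Download a quant from the Hub and load it; the filename glob is an
# assumption about how the GGUF files in this repo are named.
llm = Llama.from_pretrained(
    repo_id="redponike/Llama-3.1-Nemotron-Nano-4B-v1.1-GGUF",
    filename="*Q4_K_M*.gguf",
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a GGUF quant is."}]
)
print(out["choices"][0]["message"]["content"])
```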
Model tree for redponike/Llama-3.1-Nemotron-Nano-4B-v1.1-GGUF:
- Base model: nvidia/Llama-3.1-Minitron-4B-Width-Base
- Finetuned: nvidia/Llama-3.1-Nemotron-Nano-4B-v1.1
- Quantized (this repo): redponike/Llama-3.1-Nemotron-Nano-4B-v1.1-GGUF
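
If you want to confirm the model name and quant type baked into a downloaded file, the gguf Python package (maintained in the llama.cpp repo) can read the header. A minimal sketch follows; the local path is a placeholder, and the low-level field decoding shown here may differ slightly between gguf package versions.

```python
from gguf import GGUFReader, GGUFValueType

# Path to a locally downloaded quant (placeholder name).
reader = GGUFReader("Llama-3.1-Nemotron-Nano-4B-v1.1-Q4_K_M.gguf")

# Print the general.* metadata, which includes the model name and
# the file's quantization type.
for key, field in reader.fields.items():
    if not key.startswith("general."):
        continue
    part = field.parts[field.data[0]]
    if field.types and field.types[0] == GGUFValueType.STRING:
        value = bytes(part).decode("utf-8")
    else:
        value = part[0]  # scalar fields; arrays would need more handling
    print(f"{key}: {value}")
```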