This repo contains the GGUF imatrix quants for k4yt3x/Arynia-LLaMA-70B.
The imatrix was computed from bartowski's calibration_datav3.txt.
The following precisions are available:
- 4-bit
- 5-bit
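As a rough guide to which precision fits your hardware, the on-disk size of a quantized model scales with parameter count times bits per weight. The sketch below estimates this for a 70B-parameter model; actual GGUF files are somewhat larger, since some tensors (e.g. embeddings and output layers) are typically kept at higher precision, and the exact sizes depend on the specific quant type.

```python
def approx_gguf_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough quantized model size: parameters * bits / 8, in gigabytes.

    This is a lower-bound estimate; real GGUF files add overhead for
    metadata and higher-precision tensors.
    """
    return n_params * bits_per_weight / 8 / 1e9

# A 70B model at the precisions offered here (approximate):
print(f"4-bit: ~{approx_gguf_size_gb(70e9, 4.0):.1f} GB")  # ~35.0 GB
print(f"5-bit: ~{approx_gguf_size_gb(70e9, 5.0):.1f} GB")  # ~43.8 GB
```

Add a few gigabytes on top of the file size for the KV cache and runtime buffers when deciding whether a quant fits in memory.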
Model tree for k4yt3x/Arynia-LLaMA-70B-GGUF
Base model: k4yt3x/Arynia-LLaMA-70B