# RickLLM

This is a GGUF-format model uploaded using llama.cpp.
## Model Details
- Model Format: GGUF (GPU/CPU inference using llama.cpp)
- Base Model: Unsloth
- Quantization: Q8_0
- Use Case: This model can be used with llama.cpp for efficient inference on both GPU and CPU.
## Usage

This model can be used with llama.cpp. Example invocation (in recent llama.cpp releases the binary is named `llama-cli` rather than `main`):

```sh
./main -m RickLLM.gguf -n 1024
```
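For scripted use, the same invocation can be driven from Python via `subprocess`. The helper below is a minimal sketch: the function name and defaults are illustrative, and it assumes the llama.cpp `main` binary and `RickLLM.gguf` sit in the working directory.

```python
import subprocess
from typing import List, Optional

def build_llama_cmd(model_path: str,
                    n_predict: int = 1024,
                    prompt: Optional[str] = None) -> List[str]:
    """Assemble the llama.cpp CLI invocation shown above.

    `-m` selects the GGUF model file and `-n` caps the number of
    tokens to generate; `-p` supplies an optional prompt.
    (Function name and defaults are hypothetical, for illustration.)
    """
    cmd = ["./main", "-m", model_path, "-n", str(n_predict)]
    if prompt is not None:
        cmd += ["-p", prompt]
    return cmd

if __name__ == "__main__":
    # Launches llama.cpp as a subprocess; requires the compiled
    # binary and the GGUF file to be present locally.
    subprocess.run(build_llama_cmd("RickLLM.gguf", prompt="Hello"),
                   check=True)
```

Building the argument list separately keeps the command easy to inspect or log before anything is executed.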
## License
Please refer to the original model's license for terms of use.