RickLLM

This is a model in GGUF format, converted for use with llama.cpp.

Model Details

  • Model Format: GGUF (GPU/CPU inference using llama.cpp)
  • Base Model: Unsloth
  • Quantization: Q8_0
  • Use Case: Efficient local inference on both GPU and CPU via llama.cpp.
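The Q8_0 entry above refers to llama.cpp's 8-bit round-to-nearest quantization scheme, in which weights are stored in blocks of 32 int8 values sharing a single scale factor. A minimal sketch of the idea (toy values, not llama.cpp's actual implementation):

```python
def q8_0_quantize_block(block):
    # One Q8_0-style block: 32 floats -> 32 int8 values plus one scale.
    assert len(block) == 32
    amax = max(abs(x) for x in block)
    scale = amax / 127.0 if amax else 1.0
    q = [round(x / scale) for x in block]  # each q value fits in int8
    return scale, q

def q8_0_dequantize_block(scale, q):
    # Recover approximate float weights from the quantized block.
    return [scale * v for v in q]

weights = [0.5, -1.0, 0.25, 0.0] * 8  # 32 toy weights
scale, q = q8_0_quantize_block(weights)
restored = q8_0_dequantize_block(scale, q)
```

Because each block carries its own scale, the maximum reconstruction error within a block is half a quantization step (scale / 2), which is why Q8_0 loses very little accuracy relative to smaller quant types.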

Usage

This model can be used with llama.cpp. Example usage (recent llama.cpp builds name the CLI binary llama-cli; older builds used main):

./llama-cli -m RickLLM.gguf -p "Your prompt here" -n 1024
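To sanity-check a downloaded file before running it, you can inspect its header: the GGUF format begins with the 4-byte ASCII magic "GGUF" followed by a little-endian uint32 format version. A minimal sketch (operating on raw bytes so no model file is needed for the demo):

```python
import struct

def read_gguf_header(data: bytes) -> int:
    """Return the GGUF format version, or raise if the magic is wrong."""
    # A GGUF file starts with the ASCII magic "GGUF",
    # followed by a little-endian uint32 format version.
    if data[:4] != b"GGUF":
        raise ValueError("not a GGUF file")
    return struct.unpack_from("<I", data, 4)[0]

# Demo with a synthetic 8-byte header:
header = b"GGUF" + struct.pack("<I", 3)
print(read_gguf_header(header))  # 3
```

On a real download, pass the first 8 bytes of the file, e.g. `read_gguf_header(open("RickLLM.gguf", "rb").read(8))`.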

License

Please refer to the original model's license for terms of use.

  • Model size: 8.03B params
  • Architecture: llama