This repository contains an 8-bit quantized Gemma-3-1B model in rkllm format.
Source model: Gemma-3-1B-it
The model was converted to the rkllm format using rkllm-toolkit v1.2.0.
You can chat with this model on an Axon SBC using the vicharak-chat app.