---
license: apache-2.0
base_model:
- sarvamai/sarvam-m
---

# Sarvam-M

<p align="center">
  <a href="https://dashboard.sarvam.ai/playground" target="_blank" rel="noopener noreferrer">
    <img
      src="https://img.shields.io/badge/🚀 Chat on Sarvam Playground-1488CC?style=for-the-badge&logo=rocket"
      alt="Chat on Sarvam Playground"
    />
  </a>
</p>

# Model Information

> [!NOTE]
> This repository contains the GGUF version of [`sarvam-m`](https://huggingface.co/sarvamai/sarvam-m) in Q8_0 precision.

Learn more about sarvam-m in our detailed [blog post](https://www.sarvam.ai/blogs/sarvam-m).
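If the GGUF file is not already on disk, one way to fetch it is with the Hugging Face CLI. A minimal sketch — the repo id below is a placeholder (this repository's actual id is not stated here), and the destination directory is only illustrative:

```shell
# Download the Q8_0 GGUF with the Hugging Face CLI (ships with huggingface_hub).
# NOTE: "sarvamai/sarvam-m-gguf" is a placeholder repo id -- replace it with
# this repository's actual id, and adjust the destination directory.
huggingface-cli download sarvamai/sarvam-m-gguf \
  sarvam-m-q8_0.gguf \
  --local-dir /your/folder/path
```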

# Running the model on a CPU

You can run the model on your local machine (without a GPU) using llama.cpp, as explained [here](https://github.com/ggml-org/llama.cpp/tree/master/tools/main).

Example command:

```sh
./build/bin/llama-cli -i -m /your/folder/path/sarvam-m-q8_0.gguf -c 8192 -t 16
```

Here `-i` starts interactive chat mode, `-m` points to the downloaded GGUF file, `-c 8192` sets the context window size, and `-t 16` sets the number of CPU threads.
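The `-t 16` above assumes 16 available cores. A small sketch (assuming a Linux-like system where `nproc` is available) that picks a thread count from the machine instead of hard-coding it:

```shell
# Pick a llama.cpp thread count: use every core nproc reports,
# capped at 16 (the value used in the example command above).
threads=$(nproc)
if [ "$threads" -gt 16 ]; then
  threads=16
fi
echo "using $threads threads"
# Then pass it through, e.g.:
# ./build/bin/llama-cli -i -m /your/folder/path/sarvam-m-q8_0.gguf -c 8192 -t "$threads"
```

Matching the thread count to physical cores usually helps on CPU inference; oversubscribing threads tends to hurt token throughput rather than help it.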