Uploaded model

  • Developed by: qingy2024
  • License: apache-2.0
  • Finetuned from model: unsloth/gemma-2-2b-bnb-4bit

Note: This model uses a custom chat template:

```
Below is the original text. Please rewrite it to correct any grammatical errors if any, improve clarity, and enhance overall readability.

### Original Text:
{PROMPT HERE}

### Corrected Text:
{MODEL'S OUTPUT HERE}
```

For optimal results, I recommend a temperature of 0.0 and a repeat penalty of 1.0.
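As a minimal sketch, the template above can be filled in with a small helper before the prompt is passed to your GGUF runtime (llama.cpp, llama-cpp-python, etc.). The function name `build_grmr_prompt` is hypothetical, not part of the model's distribution:

```python
def build_grmr_prompt(text: str) -> str:
    """Format input text with this model's custom chat template.

    The template string below is copied verbatim from the model card;
    the model is expected to continue after "### Corrected Text:".
    """
    return (
        "Below is the original text. Please rewrite it to correct any "
        "grammatical errors if any, improve clarity, and enhance overall "
        "readability.\n\n"
        "### Original Text:\n"
        f"{text}\n\n"
        "### Corrected Text:\n"
    )

prompt = build_grmr_prompt("their going to the store tomorrow")
print(prompt)
```

When sending this prompt to an inference backend, apply the sampling settings recommended above (temperature 0.0, repeat penalty 1.0) so the correction stays deterministic and close to the input.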

GGUF

  • Model size: 2.61B params
  • Architecture: gemma2


Model tree for qingy2024/GRMR-2B-Instruct-GGUF

  • Base model: google/gemma-2-2b (this model is a quantized derivative)
