---
license: bigcode-openrail-m
pipeline_tag: text-generation
library_name: gguf
---

GGUF quants for https://huggingface.co/bigcode/starcoder2-15b

> The StarCoder2-15B model is a 15B-parameter model trained on 600+ programming languages from The Stack v2, with opt-out requests excluded. The model uses Grouped Query Attention, a context window of 16,384 tokens with sliding-window attention of 4,096 tokens, and was trained using the Fill-in-the-Middle objective on 4+ trillion tokens.

| Layers | Context | Template |
| --- | --- | --- |
| <pre>40</pre> | <pre>16384</pre> | <pre>{context}<br><br>Code Editing Instruction: {prompt}<br>{response}</pre> |
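These quants are intended for llama.cpp-compatible runtimes. Below is a minimal sketch of assembling prompts for `llama-cli`: the `.gguf` filename is a placeholder for whichever quant file you downloaded, and the Fill-in-the-Middle token spellings are an assumption carried over from the StarCoder family tokenizer, so check them against the tokenizer of the file you are using.

```shell
# Placeholder filename — substitute the quant you actually downloaded.
MODEL=starcoder2-15b-Q4_K_M.gguf

# StarCoder2 was trained with the Fill-in-the-Middle objective, so an infill
# prompt wraps the code before/after the hole in FIM special tokens
# (assumed spellings from the StarCoder family tokenizer).
PREFIX='def fib(n):'
SUFFIX='    return b'
PROMPT="<fim_prefix>${PREFIX}<fim_suffix>${SUFFIX}<fim_middle>"

echo "$PROMPT"   # inspect the assembled infill prompt

# Plain completion using the full 16384-token context:
#   llama-cli -m "$MODEL" -c 16384 -p 'def quicksort(arr):'
# Fill-in-the-Middle infill:
#   llama-cli -m "$MODEL" -c 16384 -p "$PROMPT"
```

The commented invocations are left inert so the snippet runs without the model file present; uncomment them once the quant is on disk.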