---
base_model: zed-industries/zeta
pipeline_tag: text-generation
inference: true
language:
- en
license: apache-2.0
model_creator: zed-industries
model_name: zeta
model_type: qwen2
datasets:
- zed-industries/zeta
quantized_by: brittlewis12
tags:
- qwen2
---
# Zeta GGUF
**Original model**: [Zeta](https://huggingface.co/zed-industries/zeta)
**Model creator**: [Zed Industries](https://huggingface.co/zed-industries)
> This is a fine-tuned version of Qwen2.5-Coder-7B for edit prediction support in Zed. Please refer to the [zeta dataset](https://huggingface.co/datasets/zed-industries/zeta) to see how you can train this model yourself.
This repo contains GGUF format model files for Zed Industries' Zeta model, which powers the [new "Edit Prediction" feature](https://zed.dev/blog/edit-prediction) in their open-source text editor, [Zed](https://zed.dev).
### What is GGUF?
GGUF is a file format for representing AI models. It is the third version of the format, introduced by the llama.cpp team on August 21st, 2023, as a replacement for GGML, which is no longer supported by llama.cpp.
Converted with llama.cpp build 4710 (revision [8a8c4ce](https://github.com/ggerganov/llama.cpp/commits/8a8c4ceb6050bd9392609114ca56ae6d26f5b8f5)),
using [autogguf-rs](https://github.com/brittlewis12/autogguf-rs).
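If you want to fetch a quantized file programmatically, the snippet below is a minimal sketch using `huggingface_hub`; the `repo_id` and GGUF filename shown are placeholders, so substitute the actual quant you want from this repo's file list.

```python
# Minimal sketch: download one GGUF quant from this repo with huggingface_hub.
# NOTE: repo_id and filename below are placeholders/assumptions -- check the
# "Files and versions" tab of this repo for the exact quant filenames.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="brittlewis12/zeta-GGUF",  # placeholder repo id
    filename="zeta-Q4_K_M.gguf",       # placeholder quant filename
)
print(model_path)  # local cache path to the downloaded GGUF file
```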
### Prompt template: ChatML
```
<|im_start|>system
{{system_message}}<|im_end|>
<|im_start|>user
{{prompt}}<|im_end|>
<|im_start|>assistant
```
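As a rough illustration of the template in use, here is a sketch with `llama-cpp-python`, asking it to apply the ChatML chat format via its chat-completion API; the model path and messages are illustrative assumptions, not part of this repo.

```python
# Minimal sketch: run the GGUF locally with llama-cpp-python using the ChatML
# chat format. The model path and messages are illustrative only.
from llama_cpp import Llama

llm = Llama(
    model_path="./zeta-Q4_K_M.gguf",  # placeholder path to a downloaded quant
    n_ctx=4096,                       # context length; adjust for your use case
    chat_format="chatml",             # matches the prompt template above
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are an edit prediction assistant."},
        {"role": "user", "content": "Predict the next edit for this snippet..."},
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```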
---
## Download & run with [cnvrs](https://twitter.com/cnvrsai) on iPhone, iPad, and Mac!

[cnvrs](https://testflight.apple.com/join/sFWReS7K) is the best app for private, local AI on your device:
- create & save **Characters** with custom system prompts & temperature settings
- download and experiment with any **GGUF model** you can [find on HuggingFace](https://huggingface.co/models?library=gguf)!
  - or, use an API key with the chat completions-compatible model provider of your choice -- ChatGPT, Claude, Gemini, DeepSeek, & more!
- make it your own with custom **Theme colors**
- powered by Metal ⚡️ & [Llama.cpp](https://github.com/ggerganov/llama.cpp), with **haptics** during response streaming!
- **try it out** yourself today, on [Testflight](https://testflight.apple.com/join/sFWReS7K)!
- follow [cnvrs on twitter](https://twitter.com/cnvrsai) to stay up to date