Some GGUF v2 quantizations of the model KnutJaegersberg/deacon-3b. It is based on conceptofmind/Open-LLongMA-3b, so you will need to set linear rope_scaling to 0.25.
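As a minimal sketch, with llama-cpp-python the linear scaling factor can be passed as `rope_freq_scale` when loading the GGUF file. The filename and context length below are assumptions for illustration; only the 0.25 scaling factor comes from this card.

```python
# Hedged sketch, assuming llama-cpp-python is installed.
from llama_cpp import Llama

llm = Llama(
    model_path="deacon-3b.q4_k_m.gguf",  # hypothetical local path to one of the quants
    n_ctx=8192,                          # assumed extended context window
    rope_freq_scale=0.25,                # linear rope scaling factor noted above
)
```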

Prompt Example:

### System:

You are an AI assistant. User will you give you a task. Your goal is to complete the task as faithfully as you can. While performing the task think step-by-step and justify your steps.

### Instruction:

How do you fine tune a large language model?

### Response:
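A minimal sketch of assembling this template and generating with the model loaded above; the exact whitespace, stop sequence, and sampling settings are assumptions, not values from this card.

```python
# Hedged sketch: build the prompt from the template above and generate.
# `llm` is the Llama instance from the loading example; whitespace in the
# template and the sampling parameters below may need adjusting.
system = (
    "You are an AI assistant. User will you give you a task. "
    "Your goal is to complete the task as faithfully as you can. "
    "While performing the task think step-by-step and justify your steps."
)
instruction = "How do you fine tune a large language model?"

prompt = f"### System:\n{system}\n\n### Instruction:\n{instruction}\n\n### Response:\n"

output = llm(prompt, max_tokens=256, stop=["### Instruction:"])
print(output["choices"][0]["text"])
```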
Model details:

- Format: GGUF
- Model size: 3.43B params
- Architecture: llama
- Quantizations: 4-bit, 5-bit
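To fetch one of the quantized files, something like the following should work; the exact GGUF filename is an assumption, so check the repository's file list.

```python
# Hedged sketch: download a quant from the Aryanne/Deacon-3B-gguf repository.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="Aryanne/Deacon-3B-gguf",
    filename="deacon-3b.q4_k_m.gguf",  # hypothetical 4-bit quant filename
)
print(model_path)
```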