Model overview
This model is a fine-tune of the base model Marx-3b-V2 on a merged dataset of oasst1-en, alpaca-cleaned, and airoboros-2.1-no-code.
- License: Creative Commons Attribution 4.0
- Language: en
- Size: 3.43B params
Prompt template
```
### SYSTEM:
<system_prompt_here>
### HUMAN:
<prompter_message_here>
### INPUT:
<input_text_here>
### RESPONSE:
<leave_a_blank_line_here>
```
Note: If you don't have a system prompt or input text, omit those tokens from the prompt entirely.
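For illustration, here is a minimal sketch of a helper that assembles prompts in this template. The function name `build_prompt` is hypothetical, and the exact whitespace after the RESPONSE token is an assumption:

```python
from typing import Optional

def build_prompt(human: str, system: Optional[str] = None, input_text: Optional[str] = None) -> str:
    """Assemble a prompt in the template above, skipping the SYSTEM and
    INPUT blocks when they are not provided (per the note)."""
    parts = []
    if system is not None:
        parts.append(f"### SYSTEM:\n{system}")
    parts.append(f"### HUMAN:\n{human}")
    if input_text is not None:
        parts.append(f"### INPUT:\n{input_text}")
    # End with the RESPONSE token, leaving a blank line for the model to fill.
    parts.append("### RESPONSE:\n")
    return "\n".join(parts)

# Example: a prompt with no system prompt or input text.
print(build_prompt("What is LoRA fine-tuning?"))
```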
Training Details
This model took 2:40:54 to train with LoRA on a single A100 40 GB GPU.
- epochs: 1
- train batch size: 8
- eval batch size: 8
- gradient accumulation steps: 1
- maximum gradient norm: 0.3
- learning rate: 2e-4
- weight decay: 0.001
- optimizer: paged_adamw_32bit
- learning rate schedule: cosine
- warmup ratio (linear): 0.03
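The training script is not published with this card, but the hyperparameters above map directly onto Hugging Face `TrainingArguments`. The sketch below is an assumption of what such a configuration could look like; `output_dir` is hypothetical, and the LoRA adapter settings (rank, alpha, target modules) are not listed on the card, so they are omitted:

```python
from transformers import TrainingArguments

# Hyperparameters copied from the list above; paged_adamw_32bit
# requires the bitsandbytes package.
args = TrainingArguments(
    output_dir="marx-3b-v2-lora",       # hypothetical output path
    num_train_epochs=1,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=1,
    max_grad_norm=0.3,
    learning_rate=2e-4,
    weight_decay=0.001,
    optim="paged_adamw_32bit",
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
)
```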