**User:** Hi Fijik!
**Fijik:** Hello! What's up? How may I help?
## What is it?
This is Fijik 1.0, a dense, 56-layer transformer LLM with **3 billion** parameters based on Qwen2.5. Specifically, it was merged using Mergekit to be twice as large as Qwen2.5 1.5B.
After merging, we fine-tuned the model on a custom dataset mix built for it, to improve its performance even more.
- Step 1 of fine-tuning via Unsloth: SFT on an estimated 20 million tokens (more or less).
- Step 2 of fine-tuning via Unsloth: DPO for 2 epochs for even better instruction following.

After these two steps, we got a powerful model that has fewer parameters than Llama 3.1 8B yet performs just as well, if not better. Note that unlike our other recent models, it is not a thinking model, yet it can reason quite well. Our theory behind this model is that a smaller yet deeper model can outperform for its size. A minimal sketch of the two-step pipeline is shown below.
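For reference, here is roughly what this two-step Unsloth + TRL pipeline looks like. The dataset files, merged-model path, and hyperparameters below are illustrative placeholders rather than the exact values we used, and TRL's trainer arguments shift between versions:

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer, DPOTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the merged 56-layer model (path is a placeholder) in 4-bit
# for memory-efficient LoRA fine-tuning.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="./fijik-merged",  # output of the Mergekit config below
    max_seq_length=4096,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Step 1: SFT on the instruction mix (~20M tokens in our run).
sft_data = load_dataset("json", data_files="sft_mix.jsonl", split="train")
SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=sft_data,
    dataset_text_field="text",
    args=TrainingArguments(output_dir="fijik-sft", num_train_epochs=1),
).train()

# Step 2: DPO for 2 epochs on preference pairs
# (columns: prompt / chosen / rejected).
dpo_data = load_dataset("json", data_files="dpo_pairs.jsonl", split="train")
DPOTrainer(
    model=model,
    ref_model=None,  # with LoRA adapters, TRL uses the base weights as reference
    tokenizer=tokenizer,
    train_dataset=dpo_data,
    beta=0.1,
    args=TrainingArguments(output_dir="fijik-dpo", num_train_epochs=2),
).train()
```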
Alibaba's Qwen team states that Qwen2.5 was pre-trained on up to 18 trillion high-quality tokens. This model supports up to 32768 input tokens and can generate up to 8192 tokens.
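These limits can be sanity-checked straight from the model config; a quick sketch (the repo id below is a placeholder for the actual Fijik repository):

```python
from transformers import AutoConfig

# Placeholder repo id; substitute the actual Fijik repository.
cfg = AutoConfig.from_pretrained("Pinkstack/Fijik-3b-v1")
print(cfg.max_position_embeddings)  # expect 32768
print(cfg.num_hidden_layers)        # expect 56
```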
## What should Fijik be used for?
Fijik 1.0 3B is, by design, a production-ready, general-use, high-performance model that is also small enough to run at high token throughput while minimising performance loss.
- We made an effort to ensure the model is safe while keeping it usable. It also adheres well to system prompts, so it is very customisable. We did not put any information about the model's identity into our fine-tuning data; it knows it is a Large Language Model (LLM), but it does not know it is Fijik unless you specify that in the system prompt (see the inference sketch after this list).
- Thanks to the model's large context window, it can be used for RAG, but as with any other LLM, be aware that it may hallucinate.
- We included a fair amount of creative-writing examples in our fine-tuning data, so the model is quite good at it.
- Coding and math: in our SFT and DPO fine-tuning data we put effort into improving coding and step-by-step math performance. It is not perfect, but no LLM is.
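Since the model only learns its identity from the system prompt, a typical deployment sets it there. A minimal inference sketch with `transformers` (the repo id is a placeholder for the actual model id):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Pinkstack/Fijik-3b-v1"  # placeholder: use the actual repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# The identity lives in the system prompt; without it, the model only
# knows that it is an LLM.
messages = [
    {"role": "system", "content": "You are Fijik, a helpful assistant made by Pinkstack."},
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Inputs may be up to 32768 tokens; generation is capped at 8192 tokens.
outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```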
## Examples
**User:** What is 8.8-8.11?
(Surprisingly, a specific new SOTA LLM got this wrong, but our model didn't.)
**Fijik:** 8.8 - 8.11 = 0.69
So, 8.8 - 8.11 equals 0.69.
## Limitations
This model is not uncensored, yet it may still produce erotic outputs. You are solely responsible for the outputs of the model. Like any other LLM, it may hallucinate and produce inaccurate, dangerous, or even completely nonsensical outputs; users and hosters alike should be aware of this. The information the model provides may seem accurate, but for important tasks, always double-check responses against credible sources.
## Notices
This was the mergekit YAML config we used:
```yaml
base_model: Qwen/Qwen2.5-1.5B-Instruct
merge_method: passthrough
slices:
  - sources:
      - model: Qwen/Qwen2.5-1.5B-Instruct
        layer_range: [0, 21] # Lower layers
  - sources:
      - model: Qwen/Qwen2.5-Coder-1.5B-Instruct
        layer_range: [8, 10] # Better coding performance
  - sources:
      - model: huihui-ai/Qwen2.5-1.5B-Instruct-abliterated
        layer_range: [5, 24] # Mid layers
  - sources:
      - model: Unsloth/Qwen2.5-1.5B-Instruct
        layer_range: [14, 28] # Higher layers
tokenizer_source: unsloth/Qwen2.5-1.5B-Instruct
dtype: bfloat16
```
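Since `passthrough` simply stacks the listed slices, the final depth is the sum of the slice lengths. A quick sanity check, assuming the config above is saved as `fijik-merge.yaml`:

```python
import yaml  # pip install pyyaml

with open("fijik-merge.yaml") as f:
    config = yaml.safe_load(f)

# layer_range is a half-open interval [start, end) in Mergekit.
total = sum(
    src["layer_range"][1] - src["layer_range"][0]
    for s in config["slices"]
    for src in s["sources"]
)
print(total)  # 21 + 2 + 19 + 14 = 56 layers
```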
## Uploaded model
- Developed by: Pinkstack
- License: Apache 2.0
- Finetuned from model: Pinkstack/Fijik-3b-v1-sft
This Qwen2.5 model was trained with Unsloth and Hugging Face's TRL library.
## Citations
Magpie:
```bibtex
@misc{xu2024magpie,
      title={Magpie: Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing},
      author={Zhangchen Xu and Fengqing Jiang and Luyao Niu and Yuntian Deng and Radha Poovendran and Yejin Choi and Bill Yuchen Lin},
      year={2024},
      eprint={2406.08464},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
Lion:
```bibtex
@misc{chen2023symbolic,
      title={Symbolic Discovery of Optimization Algorithms},
      author={Xiangning Chen},
      year={2023},
      eprint={2302.06675},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```