|
--- |
|
library_name: transformers |
|
base_model: |
|
- Rombo-Org/Rombo-LLM-V3.0-Qwen-32b |
|
license: apache-2.0 |
|
datasets: |
|
- NovaSky-AI/Sky-T1_data_17k |
|
tags: |
|
- unsloth |
|
--- |
|
# Like my work? Support me on Patreon for only $5 a month, vote on which models I make next, and get access to this org's private repos
|
Subscribe below:
|
- Patreon.com/Rombodawg |
|
__________________________________________________ |
|
|
|
# Rombo-LLM-V3.0-Qwen-32b |
|
|
|
 |
|
|
|
Rombo-LLM-V3.0-Qwen-32b is a continued finetune of the previous V2.5 version on the "NovaSky-AI/Sky-T1_data_17k" dataset. The resulting model was then merged back into the base model for higher performance, following the continuous finetuning technique linked below. It is a good general-purpose model, but it excels at coding and math.
|
|
|
Continuous finetuning write-up:

- https://docs.google.com/document/d/1OjbjU5AOz4Ftn9xHQrX3oFQGhQ6RDUuXQipnQ9gn6tU/edit?usp=sharing
|
|
|
Original weights: |
|
|
|
- https://huggingface.co/Rombo-Org/Rombo-LLM-V3.0-Qwen-32b |
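A minimal sketch of running the model with the `transformers` library. This assumes a GPU (or multi-GPU setup) with enough memory for a 32B model; the prompt content is only an example, and the `build_chat` helper is a hypothetical convenience wrapper, not part of the model release:

```python
# Sketch: load Rombo-LLM-V3.0-Qwen-32b and generate a response.
# Requires substantial GPU memory; `device_map="auto"` shards across
# available devices via accelerate.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Rombo-Org/Rombo-LLM-V3.0-Qwen-32b"


def build_chat(messages, tokenizer):
    # Render a list of {"role": ..., "content": ...} dicts into a prompt
    # string using the tokenizer's built-in chat template.
    return tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )


if __name__ == "__main__":
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )

    prompt = build_chat(
        [{"role": "user", "content": "Write a Python function that checks if a number is prime."}],
        tokenizer,
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=512)

    # Decode only the newly generated tokens, skipping the prompt.
    print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```

Sampling settings (temperature, top-p) can be passed to `model.generate` as usual; quantized GGUF or 4-bit loading may be preferable on smaller hardware.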
|
|
|
Benchmarks: (Coming soon) |