# OpenCerebrum-2.0-7B
OpenCerebrum-2.0-7B is an open-source language model fine-tuned from the alpindale/Mistral-7B-v0.2-hf base model on a diverse dataset, with the aim of replicating the capabilities of Aether Research's proprietary Cerebrum model.
The model was fine-tuned with SFT and DPO on approximately 7,000 examples across 15 data sources spanning coding, math, science, multi-turn conversation, RAG, reasoning, and general instruction-following. The goal was to assemble public datasets that could help the model achieve strong performance on benchmarks where Cerebrum excels.
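The actual training code is not published with this card; the snippet below is only a minimal sketch of how a DPO stage like the one described above is commonly set up with the trl library. The dataset file, column layout, hyperparameters, and identifiers are illustrative assumptions rather than the Cognitive Computations recipe, and argument names can differ between trl versions.

```python
# Illustrative DPO fine-tuning sketch with trl (not the actual OpenCerebrum recipe).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_id = "alpindale/Mistral-7B-v0.2-hf"
model = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# DPO expects preference pairs with "prompt", "chosen", and "rejected" columns.
# "preference_pairs.json" is a placeholder file name.
dataset = load_dataset("json", data_files="preference_pairs.json", split="train")

config = DPOConfig(
    output_dir="opencerebrum-dpo",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=5e-7,
    beta=0.1,  # strength of the KL constraint toward the frozen reference model
    num_train_epochs=1,
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,  # trl creates the frozen reference copy when None is passed
    args=config,
    train_dataset=dataset,
    tokenizer=tokenizer,  # renamed to processing_class in newer trl releases
)
trainer.train()
```

An SFT stage would typically be run first on the instruction data with trl's SFTTrainer, with the DPO stage applied afterwards to the resulting checkpoint.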
## Model Details
- Base Model: alpindale/Mistral-7B-v0.2-hf
- Parameters: 7 billion
- Fine-Tuning Dataset Size: ~7,000 examples
- Fine-Tuning Data: Curated in-house at Cognitive Computations from 15 different data sources covering both SFT and DPO.
- Language: English
- License: Apache 2.0
## Intended Use
OpenCerebrum-2.0-7B is intended to be a powerful open-source model for coding, math, science, and general question-answering and text generation tasks. Its diverse fine-tuning data aims to equip it with broad knowledge and reasoning capabilities.
However, as an open-source replica trained on a much smaller dataset than the original Cerebrum, it may not match Cerebrum's full performance. Biases and limitations of the fine-tuning data may also be reflected in the model's outputs.
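For reference, a minimal text-generation sketch with the transformers library is shown below. The Hub repo id is an assumption; substitute the path the model is actually published under. The sampling parameters are illustrative, not tuned.

```python
# Minimal inference sketch using Hugging Face transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Locutusque/OpenCerebrum-2.0-7B"  # assumed Hub repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision so a 7B model fits on a single GPU
    device_map="auto",
)

prompt = "Explain the difference between a list and a tuple in Python."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate a short completion with basic sampling settings.
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```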
## Limitations and Biases
- The model may have biases and limitations inherited from its fine-tuning datasets. Thorough testing is needed to characterize these.
- As a 7-billion-parameter model, it is more limited in capacity than larger models, and it still requires non-trivial compute and memory to run (see the quantized-loading sketch after this list).
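One common way to ease the memory requirement on a single GPU is 4-bit quantization through bitsandbytes. The sketch below illustrates this, again with an assumed repo id; it requires the bitsandbytes package to be installed.

```python
# Sketch: load the model in 4-bit to reduce GPU memory (assumes bitsandbytes is installed).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Locutusque/OpenCerebrum-2.0-7B"  # assumed Hub repo id

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```

Quantization trades a small amount of output quality for a large reduction in memory footprint, which is usually an acceptable trade-off for local experimentation with a 7B model.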