---
base_model:
- prithivMLmods/Primal-Opus-14B-Optimus-v1
- prithivMLmods/Megatron-Opus-14B-Exp
- prithivMLmods/Calcium-Opus-14B-Elite2-R1
library_name: transformers
tags:
- mergekit
- merge
model-index:
- name: Megatron-Opus-14B-Stock
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: wis-k/instruction-following-eval
      split: train
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 51.74
      name: averaged accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FMegatron-Opus-14B-Stock
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: SaylorTwift/bbh
      split: test
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 48.13
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FMegatron-Opus-14B-Stock
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: lighteval/MATH-Hard
      split: test
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 32.78
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FMegatron-Opus-14B-Stock
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      split: train
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 16.67
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FMegatron-Opus-14B-Stock
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 20.19
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FMegatron-Opus-14B-Stock
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 47.7
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FMegatron-Opus-14B-Stock
      name: Open LLM Leaderboard
---

# **Megatron-Opus-14B-Stock**

**Megatron-Opus-14B-Stock** [ Megatron + Primal + Elite2 ] is based on the Qwen 2.5 14B architecture and is designed to enhance the reasoning capabilities of 14B-parameter models. It has been fine-tuned on synthetic dataset entries derived in part from Qwen's QwQ and DeepSeek R1, further optimizing its chain-of-thought (CoT) reasoning and logical problem-solving abilities. The model demonstrates significant improvements in context understanding, structured data processing, and long-context comprehension, making it well suited for complex reasoning tasks, instruction following, and text generation.

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [prithivMLmods/Megatron-Opus-14B-Exp](https://huggingface.co/prithivMLmods/Megatron-Opus-14B-Exp) as the base model.
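For reference, a Model Stock merge like this one can usually be reproduced with mergekit itself. The sketch below is not part of the original recipe: it assumes the Python API documented in the mergekit README (`MergeConfiguration`, `MergeOptions`, `run_merge`), which may change between versions, and uses the placeholder file name `megatron-stock.yaml` for the YAML listed in the Configuration section below. The `mergekit-yaml` command-line tool offers the same functionality from the shell.

```python
# Hedged reproduction sketch (not part of the original card). Assumes mergekit's
# documented Python API; "megatron-stock.yaml" and the output path are placeholders.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_YML = "megatron-stock.yaml"          # save the Configuration YAML (below) here
OUTPUT_PATH = "./Megatron-Opus-14B-Stock"   # where the merged weights are written

with open(CONFIG_YML, "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path=OUTPUT_PATH,
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # run the merge on GPU if one is available
        copy_tokenizer=True,             # matches tokenizer_source: base in the config
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```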
### Models Merged

The following models were included in the merge:

* [prithivMLmods/Primal-Opus-14B-Optimus-v1](https://huggingface.co/prithivMLmods/Primal-Opus-14B-Optimus-v1)
* [prithivMLmods/Calcium-Opus-14B-Elite2-R1](https://huggingface.co/prithivMLmods/Calcium-Opus-14B-Elite2-R1)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: model_stock
base_model: prithivMLmods/Megatron-Opus-14B-Exp
tokenizer_source: base
dtype: bfloat16
out_dtype: bfloat16
parameters:
  int8_mask: true
  normalize: true
  rescale: false
models:
  - model: prithivMLmods/Megatron-Opus-14B-Exp
  - model: prithivMLmods/Primal-Opus-14B-Optimus-v1
  - model: prithivMLmods/Calcium-Opus-14B-Elite2-R1
```

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/prithivMLmods__Megatron-Opus-14B-Stock-details)!
Summarized results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/contents/viewer/default/train?q=prithivMLmods%2FMegatron-Opus-14B-Stock&sort[column]=Average%20%E2%AC%86%EF%B8%8F&sort[direction]=desc)!

| Metric              | Value (%) |
|---------------------|----------:|
| **Average**         |     36.20 |
| IFEval (0-Shot)     |     51.74 |
| BBH (3-Shot)        |     48.13 |
| MATH Lvl 5 (4-Shot) |     32.78 |
| GPQA (0-shot)       |     16.67 |
| MuSR (0-shot)       |     20.19 |
| MMLU-PRO (5-shot)   |     47.70 |
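Since the card describes the model as aimed at chain-of-thought reasoning, instruction following, and text generation, the minimal sketch below shows one way to query it with the standard `transformers` API. The prompt, system message, and generation settings are illustrative placeholders rather than settings from the original card; the tokenizer shipped with the model is assumed to provide a Qwen 2.5-style chat template.

```python
# Illustrative inference sketch (not from the original card): loads the merged
# checkpoint with transformers and asks a simple step-by-step reasoning question.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "prithivMLmods/Megatron-Opus-14B-Stock"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the merge was produced in bfloat16
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant that reasons step by step."},
    {"role": "user", "content": "A train travels 120 km in 1.5 hours. What is its average speed?"},
]

# The chat template bundled with the tokenizer formats the conversation for the model.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```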