---
license: apache-2.0
base_model:
- open-r1/OlympicCoder-7B
- Qwen/Qwen2.5-Coder-7B-Instruct
- zhuyaoyu/CodeV-R1-Qwen-7B
- TIGER-Lab/VisCoder-7B
- julien31/Soar-qwen-7b
- Tesslate/Tessa-Rust-T1-7B
- Snowflake/Arctic-Text2SQL-R1-7B
- westenfelder/Qwen2.5-Coder-7B-Instruct-NL2SH
language:
- en
pipeline_tag: text-generation
tags:
- merge
- programming
- code generation
- code
- codeqwen
- moe
- coding
- coder
- qwen2
- chat
- qwen
- qwen-coder
- mixture of experts
- qwen2moe
- 8X7B
- shared expert
library_name: transformers
---
<h2>Qwen2.5-8x7B-Vee-Eight-Coder-Instruct-53B-128k-ctx</h2>
This repo contains the full-precision source weights, in safetensors format, for generating GGUF, GPTQ, EXL2, AWQ, HQQ and other quantized formats.
The source weights can also be used directly.
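For example, a minimal sketch of loading the source weights directly with transformers (the repo id below is assumed from this card's title; adjust if it differs):

```python
# Minimal sketch: load the full-precision safetensors source directly.
# The repo id is an assumption based on this card's title.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DavidAU/Qwen2.5-8x7B-Vee-Eight-Coder-Instruct-53B-128k-ctx"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # bf16 keeps the 53B model's memory footprint manageable
    device_map="auto",
)
```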
This is a coder MoE: eight top coder models combined in a Mixture of Experts configuration, bringing the full power of each model to bear in a single 53B model.
Included:
- Qwen/Qwen2.5-Coder-7B-Instruct (500+ likes; all major + many minor programming languages)
- open-r1/OlympicCoder-7B (179+ likes; all major + many minor programming languages)
- zhuyaoyu/CodeV-R1-Qwen-7B (fine-tuned with reinforcement learning (RL))
- TIGER-Lab/VisCoder-7B (a large language model fine-tuned for Python visualization code generation and multi-turn self-correction)
- julien31/Soar-qwen-7b (SOAR creates a "virtuous cycle" of evolutionary search and learning, enabling models to bootstrap their own capabilities and solve problems previously beyond their reach)
- Tesslate/Tessa-Rust-T1-7B (leverages advanced reasoning to autonomously generate well-structured, idiomatic Rust code, including functions, structs, traits, and modules)
- westenfelder/Qwen2.5-Coder-7B-Instruct-NL2SH (trained on the NL2SH-ALFA dataset for natural-language-to-Bash translation (NL2SH))
- Snowflake/Arctic-Text2SQL-R1-7B (Text-to-SQL model fine-tuned using Group Relative Policy Optimization (GRPO) with a simple execution-based reward signal. It converts natural language questions into executable SQL queries)
EIGHT models all working together to code.
The default config activates 2 of the 8 experts.
For maximum power, use 2-8 experts.
NOTE: All experts help with coding, regardless of how many you have activated.
SETTINGS:
- Temp: 0.5 to 0.7 (or lower)
- Max context: 128k
- top_k: 20, top_p: 0.8, min_p: 0.05
- Repetition penalty: 1.05 to 1.1 (can be lower)
- Jinja template (embedded) or ChatML template
- A system prompt is not required (tests were run with a blank system prompt)
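As a rough illustration, these settings map onto transformers generation parameters as follows (a sketch reusing the model and tokenizer from the loading example above; min_p support requires a recent transformers release):

```python
# Sketch: apply the suggested sampler settings via transformers' generate().
# Assumes `model` and `tokenizer` from the loading example above.
messages = [{"role": "user", "content": "Write a Python function that reverses a linked list."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.6,         # Temp 0.5 to 0.7 per the settings above
    top_k=20,
    top_p=0.8,
    min_p=0.05,              # needs a recent transformers release
    repetition_penalty=1.05,
)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```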
MODELS in THIS MOE - see each model card for more information, benchmarks, and how they operate:
https://huggingface.co/open-r1/OlympicCoder-7B
https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct
https://huggingface.co/zhuyaoyu/CodeV-R1-Qwen-7B
https://huggingface.co/TIGER-Lab/VisCoder-7B
https://huggingface.co/julien31/Soar-qwen-7b
https://huggingface.co/Tesslate/Tessa-Rust-T1-7B
https://huggingface.co/Snowflake/Arctic-Text2SQL-R1-7B
https://huggingface.co/westenfelder/Qwen2.5-Coder-7B-Instruct-NL2SH
---
<B>QUANTS:</b>
---
Special Thanks to Team Mradermacher for the quants:
GGUF:
https://huggingface.co/mradermacher/Qwen2.5-8x7B-Vee-Eight-Coder-Instruct-53B-128k-ctx-GGUF
GGUF-IMATRIX:
https://huggingface.co/mradermacher/Qwen2.5-8x7B-Vee-Eight-Coder-Instruct-53B-128k-ctx-i1-GGUF
---
For more information / other Qwen/Mistral Coders / additional settings see:
[ https://huggingface.co/DavidAU/Qwen2.5-MOE-2x-4x-6x-8x__7B__Power-CODER__19B-30B-42B-53B-gguf ]
---
<H2>Help, Adjustments, Samplers, Parameters and More</H2>
---
<B>CHANGE THE NUMBER OF ACTIVE EXPERTS:</B>
See this document:
https://huggingface.co/DavidAU/How-To-Set-and-Manage-MOE-Mix-of-Experts-Model-Activation-of-Experts
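For transformers users, a minimal sketch of raising the active expert count at load time (the `num_experts_per_tok` field comes from the Qwen2-MoE config; its applicability here, and the repo id, are assumptions):

```python
# Sketch: override the number of active experts when loading.
# "num_experts_per_tok" is the Qwen2-MoE config field; assumed to apply to this merge.
from transformers import AutoConfig, AutoModelForCausalLM

model_id = "DavidAU/Qwen2.5-8x7B-Vee-Eight-Coder-Instruct-53B-128k-ctx"  # assumed repo id

config = AutoConfig.from_pretrained(model_id)
config.num_experts_per_tok = 4  # default is 2; the card suggests 2-8

model = AutoModelForCausalLM.from_pretrained(model_id, config=config, device_map="auto")
```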
<B>Settings: CHAT / ROLEPLAY and/or SMOOTHER operation of this model:</B>
In "KoboldCpp" or "oobabooga/text-generation-webui" or "Silly Tavern" ;
Set the "Smoothing_factor" to 1.5
: in KoboldCpp -> Settings->Samplers->Advanced-> "Smooth_F"
: in text-generation-webui -> parameters -> lower right.
: In Silly Tavern this is called: "Smoothing"
NOTE: For "text-generation-webui"
-> if using GGUFs you need to use "llama_HF" (which involves downloading some config files from the SOURCE version of this model)
Source versions (and config files) of my models are here:
https://huggingface.co/collections/DavidAU/d-au-source-files-for-gguf-exl2-awq-gptq-hqq-etc-etc-66b55cb8ba25f914cbf210be
OTHER OPTIONS:
- Increase the repetition penalty to 1.1-1.15 (not needed if you use "smoothing_factor")
- If the interface/program you are using to run AI MODELS supports "Quadratic Sampling" ("smoothing") just make the adjustment as noted.
<B>Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers</B>
This a "Class 1" model:
For all settings used for this model (including specifics for its "class"), including example generation(s) and for advanced settings guide (which many times addresses any model issue(s)), including methods to improve model performance for all use case(s) as well as chat, roleplay and other use case(s) please see:
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]