# GemMoE: Sharing Tools and Improved Base Models
I'm excited to share the tools I used to create GemMoE and release improved base models for the community to explore and build upon.
## Updates to GemMoE-Beta-1
GemMoE-Beta-1 will continue to serve as the repository for the `modeling_files` required to run the Mixture-of-Experts (MoE) models. However, I will be removing the PyTorch files from that repository.
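Because GemMoE depends on these custom modeling files rather than stock `transformers` classes, loading a checkpoint means opting into remote code. Below is a minimal loading sketch, assuming the standard `trust_remote_code` path; the dtype and device settings are illustrative choices, not requirements:
```python
# Minimal loading sketch. trust_remote_code=True pulls in GemMoE's custom
# modeling files; adjust torch_dtype / device_map to your hardware.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Crystalcareai/GemMoE-Base-Hidden"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # bf16 keeps VRAM manageable on recent GPUs
    device_map="auto",
    trust_remote_code=True,
)

inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```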
## Sponsors
I am currently looking for compute sponsors so I can continue post-training GemMoE and bring it to its full potential; I've reached the limit of what I can personally spend on this project. If you're interested in helping out, please reach out.
## New Models
I'm introducing two new models:
1. [Crystalcareai/GemMoE-Base-Hidden](https://huggingface.co/Crystalcareai/GemMoE-Base-Hidden)
- This is a new MoE created using an improved method that I will explain below.
- It utilizes a hidden gate and shows strong potential.
- The model has not been altered and requires finetuning to reach its full potential.
- If you're looking to achieve great performance with relatively minimal training, this is an excellent starting point.
2. [Crystalcareai/GemMoE-Base-Random](https://huggingface.co/Crystalcareai/GemMoE-Base-Random)
- This model was created using the same merge method as GemMoE-Base-Hidden, but with a RANDOM gate.
- It randomly selects the experts during the merging process.
- With finetuning, the model learns to choose the appropriate experts naturally, potentially leading to better results compared to GemMoE-Base-Hidden (see the routing sketch after this list).
- This method offers an intriguing mix between the clown-car and Mixtral-style approaches.
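To make the hidden-vs-random distinction concrete, here is a toy top-k router (not GemMoE's actual modeling code): the gate is a small linear layer over the hidden state that decides which experts see each token. In the random-gate variant that layer starts from random initialization and is learned during finetuning; in the hidden-gate variant it is seeded from hidden-state representations at merge time (my reading of upstream mergekit-moe behavior, so treat it as an assumption).
```python
# Toy illustration of top-k MoE routing; hidden_size / num_experts values are
# arbitrary and do not reflect GemMoE's actual configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyTopKRouter(nn.Module):
    def __init__(self, hidden_size: int, num_experts: int, top_k: int = 2):
        super().__init__()
        # In a "random" gate this linear layer starts untrained; finetuning
        # teaches it which experts to pick for each token.
        self.gate = nn.Linear(hidden_size, num_experts, bias=False)
        self.top_k = top_k

    def forward(self, hidden_states: torch.Tensor):
        # hidden_states: (batch, seq_len, hidden_size)
        logits = self.gate(hidden_states)                        # (b, s, num_experts)
        weights, expert_ids = torch.topk(logits, self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)                     # normalize over the chosen experts
        return weights, expert_ids                               # per-token expert choices and mixing weights

router = ToyTopKRouter(hidden_size=3072, num_experts=8, top_k=2)
weights, expert_ids = router(torch.randn(1, 4, 3072))
print(expert_ids.shape)  # torch.Size([1, 4, 2]) -> 2 experts chosen per token
```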
The new merge method and modeling files also reduce VRAM usage, making the models easier to finetune.
## Training Experiences and Challenges
I have successfully trained the models on a single A100 using QLoRA, although it required careful monitoring and posed some difficulties; there currently appears to be an issue between QLoRA and GemMoE. I saw better VRAM usage on 4 A6000s when finetuning with DoRA (no quantization) and DeepSpeed ZeRO-3.
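For reference, here is one way such a run could be wired up with `peft` and `bitsandbytes`. This is an illustrative sketch rather than the exact configuration I used; the rank, alpha, and `target_modules` values are placeholders you would tune for GemMoE:
```python
# Illustrative finetuning setup, not the exact GemMoE configuration.
# QLoRA path: 4-bit quantization via bitsandbytes plus a LoRA adapter.
# DoRA path: drop quantization_config and set use_dora=True, matching the
# note above that DoRA without quantization behaved better here.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_id = "Crystalcareai/GemMoE-Base-Hidden"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,  # omit this for the DoRA-without-quantization path
    device_map="auto",
    trust_remote_code=True,
)

lora_config = LoraConfig(
    r=16,                      # placeholder rank
    lora_alpha=32,
    lora_dropout=0.05,
    use_dora=False,            # set True (and skip 4-bit) to mirror the DoRA setup
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # placeholder module names
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```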
## Creating Your Own Merges
You can create your own merges using my modified branch of mergekit:
```bash
git clone -b gemmoe https://github.com/Crystalcareai/mergekit.git
```
To create an exact replica of Crystalcareai/GemMoE-Base-Hidden, use the following command:
```bash
mergekit-moe examples/gemmoe.yml ./merged --cuda --lazy-unpickle --allow-crimes
```
Feel free to modify `examples/gemmoe.yml` to customize the merge according to your preferences.
Alternatively, you can use my modified lazymergekit available on Colab: [lazymergekit-Gemmoe](https://colab.research.google.com/drive/1WWxCE4NYvJNZkjFhkL79cf-dRc3xTpGn?usp=drive_link)
Happy experimenting and building!
-Lucas