---
license: apache-2.0
language:
- zh
- en
tags:
- moe
---

# Chinese-Mixtral-Instruct
**Chinese Mixtral GitHub repository: https://github.com/ymcui/Chinese-Mixtral**

This repository contains **Chinese-Mixtral-Instruct**, which is further tuned with instruction data on [Chinese-Mixtral](https://huggingface.co/hfl/chinese-mixtral), where Chinese-Mixtral is built on top of [Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1).

**Note: this is an instruction (chat) model, which can be used for conversation, QA, etc.** A minimal usage sketch is given at the end of this card.

## Others

- For the LoRA-only model, please see: https://huggingface.co/hfl/chinese-mixtral-instruct-lora
- For the GGUF model (llama.cpp compatible), please see: https://huggingface.co/hfl/chinese-mixtral-instruct-gguf
- If you have questions/issues regarding this model, please submit an issue through https://github.com/ymcui/Chinese-Mixtral/.

## Citation

Please consider citing our paper if you use the resources of this repository. Paper link: https://arxiv.org/abs/2403.01851

```
@article{chinese-mixtral,
  title={Rethinking LLM Language Adaptation: A Case Study on Chinese Mixtral},
  author={Cui, Yiming and Yao, Xin},
  journal={arXiv preprint arXiv:2403.01851},
  url={https://arxiv.org/abs/2403.01851},
  year={2024}
}
```
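## Usage sketch

A minimal sketch of loading this instruction model with Hugging Face `transformers` and running a single chat turn. This is not part of the original card: the repo id `hfl/chinese-mixtral-instruct`, the availability of a chat template in the tokenizer, and the dtype/sampling settings are assumptions for illustration; adjust them to your hardware.

```python
# Hypothetical usage sketch: load the instruct model and generate one chat reply.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hfl/chinese-mixtral-instruct"  # assumed repo id for this model card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # Mixtral-8x7B is large; consider 4-bit loading on smaller GPUs
    device_map="auto",
)

# One user turn; the tokenizer's chat template (assumed present) formats it for the model.
messages = [{"role": "user", "content": "请简要介绍一下混合专家(MoE)模型。"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
# Strip the prompt tokens and decode only the newly generated reply.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

For CPU-only or low-memory setups, the GGUF weights linked above can be used with llama.cpp instead.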