Question Answering · Transformers · Safetensors · English · doge · text-generation · custom_code
JingzeShi committed (verified)
Commit a60a49a · 1 Parent(s): 793996d

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -22,18 +22,18 @@ pipeline_tag: question-answering
  <a href="https://arxiv.org/abs/2412.11834" target="_blank" style="margin: 2px;">
  <img alt="arXiv" src="https://img.shields.io/static/v1?label=arXiv&message=2412.11834&color=B31B1B&logo=arXiv" style="display: inline-block; vertical-align: middle;"/>
  </a>
- <a href="https://github.com/SamllDoge/small-doge" target="_blank" style="margin: 2px;">
+ <a href="https://github.com/SmallDoges/small-doge" target="_blank" style="margin: 2px;">
  <img alt="GitHub" src="https://img.shields.io/badge/GitHub-SmallDoge-181717?logo=github" style="display: inline-block; vertical-align: middle;"/>
  </a>
  <a href="https://huggingface.co/SmallDoge" target="_blank" style="margin: 2px;">
  <img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20HuggingFace-SmallDoge-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
  </a>
- <a href="https://github.com/SamllDoge/small-doge/blob/main/LICENSE" style="margin: 2px;">
+ <a href="https://github.com/SmallDoges/small-doge/blob/main/LICENSE" style="margin: 2px;">
  <img alt="License" src="https://img.shields.io/badge/License-Apache--2.0-blue.svg" style="display: inline-block; vertical-align: middle;"/>
  </a>
  </div>
 
- Doge uses Dynamic Mask Attention as sequence transformation and can use Multi-Layer Perceptron or Cross Domain Mixture of Experts as state transformation. Dynamic Mask Attention allows the Transformer to use self-attention during training and state space during inference, and Cross Domain Mixture of Experts can directly inherit the weights of Multi-Layer Perceptron for further training. This model is trained by [SmallDoge](https://huggingface.co/SmallDoge) community, for detailed algorithm and model architecture, please refer to [Wonderful Matrices](https://arxiv.org/abs/2412.11834), all training details and code are publicly available on the [small-doge](https://github.com/SamllDoge/small-doge) repository.
+ Doge uses Dynamic Mask Attention as sequence transformation and can use Multi-Layer Perceptron or Cross Domain Mixture of Experts as state transformation. Dynamic Mask Attention allows the Transformer to use self-attention during training and state space during inference, and Cross Domain Mixture of Experts can directly inherit the weights of Multi-Layer Perceptron for further training. This model is trained by [SmallDoge](https://huggingface.co/SmallDoge) community, for detailed algorithm and model architecture, please refer to [Wonderful Matrices](https://arxiv.org/abs/2412.11834), all training details and code are publicly available on the [small-doge](https://github.com/SmallDoges/small-doge) repository.
 
 
  ## Uses
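
The README paragraph above notes that Cross Domain Mixture of Experts can directly inherit the weights of a Multi-Layer Perceptron for further training. A minimal sketch of that weight-inheritance idea follows; the class and function names here are illustrative stand-ins, not the modeling code from the small-doge repository.

```python
# Hypothetical sketch: seed every MoE expert with the trained dense MLP's weights,
# so expert training continues from the dense checkpoint instead of from scratch.
import copy

import torch.nn as nn


class DenseMLP(nn.Module):
    """Plain two-layer MLP standing in for the dense state-transformation block."""

    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.up = nn.Linear(d_model, d_hidden)
        self.act = nn.GELU()
        self.down = nn.Linear(d_hidden, d_model)

    def forward(self, x):
        return self.down(self.act(self.up(x)))


def experts_from_mlp(mlp: DenseMLP, num_experts: int) -> nn.ModuleList:
    # Each expert starts as an exact copy of the dense MLP (weight inheritance).
    return nn.ModuleList(copy.deepcopy(mlp) for _ in range(num_experts))
```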
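
Since the repository is tagged custom_code, loading the model through 🤗 Transformers requires `trust_remote_code=True`. The snippet below is a minimal sketch assuming a checkpoint id such as `SmallDoge/Doge-20M` under the SmallDoge organization; the actual model id and generation settings for this card may differ.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed checkpoint id for illustration; replace with this card's model id.
model_id = "SmallDoge/Doge-20M"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("Hi, how are you doing today?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```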