Text Generation
Transformers
Commit cf60020 (verified) · 1 parent: 0d46fd9
Committed by KaraKaraWitch and nielsr (HF Staff)

Add Project Page Link and Github Link to Metadata (#4)


- Add Project Page Link and Github Link to Metadata (70197c3379eae934ef63f1835fb9478ce0826c85)


Co-authored-by: Niels Rogge <[email protected]>

Files changed (1):

1. README.md (+4 -2)
README.md CHANGED

```diff
@@ -1,7 +1,9 @@
 ---
-license: apache-2.0
 library_name: transformers
+license: apache-2.0
 pipeline_tag: text-generation
+project_page: https://sites.google.com/view/eagle-llm
+repo_url: https://github.com/recursal/RADLADS-paper
 ---
 
 This repository contains various checkpoints for ablations and other unusual models from the paper [RADLADS: Rapid Attention Distillation to Linear Attention Decoders at Scale](https://huggingface.co/papers/2505.03005).
@@ -31,4 +33,4 @@ The file numbering is currently off by one from the step numbers shown in the paper
 |L28-D3584-qwerky7_qwen2-3-4k-ckpt5.pth|2|Qwen2.5-7B-Instruct|RAD-RWKV7|4k ctxlen training, early checkpoint|
 |L28-D3584-qwerky7_qwen2-3-4k.pth|2|Qwen2.5-7B-Instruct|RAD-RWKV7|4k ctxlen training|
 
-More information can be found at the Github repository: https://github.com/recursal/RADLADS-paper
+More information can be found at the Github repository: [https://github.com/recursal/RADLADS-paper](https://github.com/recursal/RADLADS-paper)
```
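For reference, the same kind of metadata edit can also be made programmatically instead of by hand-editing the YAML front matter. Below is a minimal sketch using the `huggingface_hub` `ModelCard` API; the repo id is a hypothetical placeholder (this page does not name the repository), and a write token for the repo is assumed.

```python
from huggingface_hub import ModelCard

# Hypothetical repo id -- substitute the actual model repository on the Hub.
REPO_ID = "recursal/RADLADS-checkpoints"

# Load the existing model card (YAML front matter + markdown body).
card = ModelCard.load(REPO_ID)

# Set the metadata fields, mirroring what this commit does by hand.
card.data.license = "apache-2.0"
card.data.library_name = "transformers"
card.data.pipeline_tag = "text-generation"
# Extra keys like these are kept verbatim in the YAML front matter.
card.data.project_page = "https://sites.google.com/view/eagle-llm"
card.data.repo_url = "https://github.com/recursal/RADLADS-paper"

# Push the updated card back to the Hub as a new commit
# (requires being authenticated with write access to the repo).
card.push_to_hub(
    REPO_ID,
    commit_message="Add Project Page Link and Github Link to Metadata",
)
```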
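Since the README describes a set of `.pth` ablation checkpoints, here is a hedged sketch of fetching one of the files named in the table above. Again, the repo id is a placeholder, and the matching RAD-RWKV7 model code from the linked GitHub repository would be needed to actually run the weights.

```python
import torch
from huggingface_hub import hf_hub_download

# Hypothetical repo id -- substitute the actual checkpoint repository.
REPO_ID = "recursal/RADLADS-checkpoints"

# Download one of the .pth checkpoints listed in the README table.
path = hf_hub_download(
    repo_id=REPO_ID,
    filename="L28-D3584-qwerky7_qwen2-3-4k.pth",
)

# Inspect the raw state dict on CPU; loading it into a model requires the
# RAD-RWKV7 code from https://github.com/recursal/RADLADS-paper.
state_dict = torch.load(path, map_location="cpu")
print(f"{len(state_dict)} tensors in checkpoint")
```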