Commit 54c5dbd (verified) · 1 parent: 6f51ba5 · committed by bhatta1 and Hajar-Emami

Update README.md (#6)

- Update README.md (2c55bfec4767eea478e4682cdaffe1837191f965)


Co-authored-by: Hajar Emami <[email protected]>

Files changed (1):
  1. README.md (+2 −2)
README.md (unified diff):

@@ -39,7 +39,7 @@ Hugging Face introduced FineWeb V1.1.0, a large-scale dataset for LLM pre-traini
 
 These were applied in the order shown in the Fig 1
 
-<img src="GneissWeb3.png" alt="GneissWeb3" style="width:1000px;"/>
+<img src="GneissWeb_recipe_new.png" alt="GneissWeb_recipe_new" style="width:1000px;"/>
 
 **Figure 1 :** GneissWeb recipe
 
@@ -51,7 +51,7 @@ To compare GneissWeb against the baselines, we trained decoder models of 7B para
 
 We used FineWeb 1.1.0 and FineWeb-Edu-score2 as our comparison baselines. The former is of 15T tokens and the latter of 5.4T tokens. Fig 2 shows how the subsamples were created for the Fineweb baselines as well for GneissWeb.
 
-<img src="ablation_strategy_new.png" alt="ablation_strategy.png" style="width:1000px;"/>
+<img src="ablation_strategy_new.png" alt="ablation_strategy_new.png" style="width:1000px;"/>
 
 **Figure 2:** Subsampling and Ablation Strategy
 
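Since the commit only repoints two `<img>` tags, a quick sanity check is to confirm that the newly referenced files actually exist in the repository. Below is a minimal sketch using `huggingface_hub.hf_hub_download`; the repo id is a hypothetical placeholder (the page does not name the repository) and must be replaced with the real one.

```python
# Minimal sketch: verify that the image files the updated README references
# are present in the dataset repo. REPO_ID is a hypothetical placeholder.
from huggingface_hub import hf_hub_download

REPO_ID = "org-name/GneissWeb"  # hypothetical; substitute the actual dataset repo id

for filename in ("README.md", "GneissWeb_recipe_new.png", "ablation_strategy_new.png"):
    # hf_hub_download fetches (and caches) a single file from the Hub;
    # it raises EntryNotFoundError if the file is missing from the repo.
    local_path = hf_hub_download(repo_id=REPO_ID, filename=filename, repo_type="dataset")
    print(f"{filename} -> {local_path}")
```

Run against the real repo id, all three downloads should succeed after this commit; a failure on either PNG would mean the README still points at a missing image.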