Tags: Transformers · English · fp256 · ultra-precision · transformer · experimental · research · Eval Results
FRTR4N committed · Commit d73ed35 · verified · 1 Parent(s): 187aaa9

Update README.md

Files changed (1):
  1. README.md +3 -3
README.md CHANGED
@@ -27,7 +27,7 @@ model-index:
       name: Training Loss
 ---
 
-# Gradia FP256 Model — Step 10 Checkpoint
+# Gradia FP256 Model
 
 Gradia is an experimental high-precision transformer research project exploring the use of **FP256 (256-bit floating point)** in training language models. This model represents an early proof-of-concept demonstrating ultra-precision training.
 
@@ -111,10 +111,10 @@ If you use Gradia in your research, please cite:
 ```bibtex
 @misc{gradia2025,
   title={Gradia: Ultra-Precision Language Models with FP256 Training},
-  author={The Gradia Project Contributors},
+  author={Entelijans, GLCTC Corp},
   year={2025},
   note={Experimental FP256 transformer implementation},
-  url={https://huggingface.co/Gradia}
+  url={https://huggingface.co/ENTELIJANS}
 }
 ```
 