Update README.md
README.md CHANGED
@@ -27,7 +27,7 @@ model-index:
       name: Training Loss
 ---
 
-# Gradia FP256 Model
+# Gradia FP256 Model
 
 Gradia is an experimental high-precision transformer research project exploring the use of **FP256 (256-bit floating point)** in training language models. This model represents an early proof-of-concept demonstrating ultra-precision training.
 
@@ -111,10 +111,10 @@ If you use Gradia in your research, please cite:
 ```bibtex
 @misc{gradia2025,
   title={Gradia: Ultra-Precision Language Models with FP256 Training},
-  author={
+  author={Entelijans, GLCTC Corp},
   year={2025},
   note={Experimental FP256 transformer implementation},
-  url={https://huggingface.co/
+  url={https://huggingface.co/ENTELIJANS}
 }
 ```
 
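The commit itself only touches the model card, but for readers unfamiliar with the headline claim, here is a minimal, illustrative sketch of what FP256-style precision buys over float64. It uses Python's `mpmath` library as a stand-in (an assumption for demonstration; this commit shows none of Gradia's actual FP256 code). IEEE binary256 carries a 237-bit significand, roughly 71 decimal digits, versus float64's 53 bits:

```python
# Illustrative only: emulating FP256-style precision with mpmath.
# mp.prec sets the significand size in bits; 237 bits matches the
# significand of IEEE binary256 (octuple precision). This is an
# assumption for demonstration, not Gradia's actual implementation.
from mpmath import mp, mpf

mp.prec = 237  # ~71 decimal digits, vs. ~16 digits for float64

# A tiny update that float64 cannot represent next to 1.0:
big, tiny = 1.0, 1e-30
print(big + tiny == big)                 # True: the update vanishes in float64
print(mpf(big) + mpf(tiny) == mpf(big))  # False: preserved at 237 bits

# Accumulating many small gradient-like contributions:
acc64, acc256 = 1.0, mpf(1)
for _ in range(10_000):
    acc64 += 1e-30
    acc256 += mpf("1e-30")
print(acc64 - 1.0)      # 0.0: every contribution was rounded away
print(acc256 - mpf(1))  # ~1e-26: the contributions survive
```

The trade-off, and presumably what makes the project experimental, is that 256-bit floating point has no mainstream hardware support, so all arithmetic must be emulated in software at a substantial speed cost.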