dittops committed
Commit 6ad379a · verified · 1 Parent(s): 05724aa

Update README.md

Files changed (1)
  1. README.md +22 -11
README.md CHANGED
@@ -19,13 +19,32 @@ language:
  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
  should probably proofread and complete it, then remove this comment. -->
 
- # Hex-1
-
- Hex-1 is a 4-billion-parameter language model specifically optimized for Indian languages. It supports five major Indian languages: Hindi, Kannada, Telugu, Tamil, and Malayalam.
- When benchmarked against leading models like Gemma-2B, LLaMA-3.2-3B, and Sarvam-1, Hex-1 delivers best-in-class performance in all five supported languages on the MMLU benchmark.
+ <div align="center">
+ <img src="https://budecosystem.alwaysdata.net/wp-content/uploads/2025/05/hex1-llm-indic.png">
+ </div>
+
+ India, being one of the most linguistically diverse nations in the world, faces a major roadblock in harnessing the full potential of Generative AI. With only about 10% of the population fluent in English, the remaining 90% are effectively left behind, unable to engage with GenAI tools that are predominantly built for English-speaking users.
+
+ Most leading language models today are trained primarily on English, offering little to no support for Indian languages. As a result, the depth and richness of India's linguistic and cultural heritage are being overlooked by this global AI wave, leaving billions underserved and underrepresented. To address this gap, we need language models that are:
+
+ - Proficient in Indic languages
+ - Open-source, making them available to researchers, developers, and the public
+ - Commercially licensed, allowing businesses to freely build applications, tools, and services without restrictive usage terms
+
+ ## Hex1: Indic LLM Built for India
+
+ Hex1 is a 4B-parameter language model specifically optimized for Indian languages. It is designed to bridge the linguistic AI gap in India by enabling developers to build intelligent systems that understand and respond in native Indian languages. In its first release, Hex1 supports five major Indian languages: Hindi, Kannada, Telugu, Tamil, and Malayalam. Future versions of the model are set to expand support to more languages, broadening its usability across the Indian subcontinent.
+
+ When benchmarked against leading models like Gemma-2B, LLaMA-3.2-3B, and Sarvam-1, Hex1 delivers best-in-class performance in all five supported languages on the MMLU benchmark. This makes it one of the most capable models currently available for Indic-language tasks.
+
+ <div align="center">
+ <img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXfOWAfktE9_XdRl7UY-8tCBaY1n-myJb9UQvIKBnsagD3hBpOu28fi5LGupKjM6o-CxvozuPpGYATk0aRBDFNADwAfy8uB4S1M9SPycWDDf1VmV5Co9KPXR1_FMMAFV54DkB6uO?key=Z4vPtKGJIGf83PmLrJX9RY3I">
+ </div>
 
  ### Training hyperparameters
 
  The following hyperparameters were used during training:
  - learning_rate: 1e-05
  - train_batch_size: 8
@@ -50,11 +69,3 @@ The following hyperparameters were used during training:
  | Kannada | 52.16 | 38.31 | 53.11 | 46.38 | 52.32 |
  | Malayalam | 46.32 | 29.60 | 40.86 | 43.63 | 46.69 |
 
- ### Framework versions
-
- - Transformers 4.51.3
- - Pytorch 2.7.0+cu126
- - Datasets 3.5.0
- - Tokenizers 0.21.1
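
The hyperparameters kept in the card (`learning_rate: 1e-05`, `train_batch_size: 8`) map directly onto a Hugging Face `TrainingArguments` configuration. A minimal sketch, assuming the standard `transformers` Trainer API generated this card; the `output_dir` value is a placeholder, not something the card specifies:

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters stated in the model card.
# output_dir is a placeholder; the card does not name one.
training_args = TrainingArguments(
    output_dir="hex1-finetune",
    learning_rate=1e-5,             # learning_rate: 1e-05
    per_device_train_batch_size=8,  # train_batch_size: 8
)
```

`per_device_train_batch_size` is the `TrainingArguments` field that the Trainer reports as `train_batch_size` in auto-generated cards; any other settings (scheduler, epochs, seed) are not listed in this hunk and so are left at their defaults here.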