---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen3-4B-Base
tags:
- llm
- indic
model-index:
- name: Hex-1
  results: []
language:
- hi
- te
- ta
- ml
- kn
---


<div align="center">
  <img src="https://budecosystem.alwaysdata.net/wp-content/uploads/2025/05/hex1-llm-indic.png">
</div>

India, being one of the most linguistically diverse nations in the world, faces a major roadblock in harnessing the full potential of Generative AI. With only about 10% of the population fluent in English, the remaining 90% are effectively left behind—unable to engage with GenAI tools that are predominantly built for English-speaking users.

Most leading language models today are trained primarily on English data, offering little to no support for Indian languages. As a result, the depth and richness of India's linguistic and cultural heritage are being overlooked by this global AI wave, leaving billions underserved and underrepresented. To address this gap, we need language models that are:

- Proficient in Indic languages
- Open-source, so they are available to researchers, developers, and the public
- Commercially licensed, allowing businesses to freely build applications, tools, and services without restrictive usage terms

## Hex1: Indic LLM Built for India

Hex1 is a 4B-parameter language model specifically optimized for Indian languages. It is designed to bridge the linguistic AI gap in India by enabling developers to build intelligent systems that understand and respond in native Indian languages. In its first release, Hex1 supports five major Indian languages: Hindi, Kannada, Telugu, Tamil, and Malayalam. Future versions of the model are set to expand support to more languages, broadening its usability across the Indian subcontinent.


When benchmarked against leading models such as Gemma-2B, LLaMA-3.2-3B, and Sarvam-1, Hex1 delivers best-in-class performance on the MMLU benchmark in all five supported languages. This makes it one of the most capable models currently available for Indic language tasks.


<div align="center">
  <img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXfOWAfktE9_XdRl7UY-8tCBaY1n-myJb9UQvIKBnsagD3hBpOu28fi5LGupKjM6o-CxvozuPpGYATk0aRBDFNADwAfy8uB4S1M9SPycWDDf1VmV5Co9KPXR1_FMMAFV54DkB6uO?key=Z4vPtKGJIGf83PmLrJX9RY3I">
</div>


## Quickstart


The following code snippet illustrates how to load the model and generate content from a given input.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "budecosystem/hex-1"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# prepare the model input
prompt = "பொங்கல் என்றால் என்ன?."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist() 

content = tokenizer.decode(output_ids, skip_special_tokens=True).strip("\n")

print("content:", content)
```
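
For quick experiments, the same checkpoint can also be driven through the high-level `pipeline` API. The snippet below is a minimal sketch, not part of the official card; it assumes a recent `transformers` release in which the text-generation pipeline accepts chat-style messages and applies the tokenizer's chat template automatically.

```python
from transformers import pipeline

# Build a text-generation pipeline around the Hex-1 checkpoint.
generator = pipeline(
    "text-generation",
    model="budecosystem/hex-1",
    torch_dtype="auto",
    device_map="auto",
)

# Chat-style input; the tokenizer's chat template is applied automatically.
messages = [{"role": "user", "content": "பொங்கல் என்றால் என்ன?"}]  # "What is Pongal?" (Tamil)

outputs = generator(messages, max_new_tokens=256)
# For chat input, the pipeline returns the conversation with the assistant
# reply appended as the last message.
print(outputs[0]["generated_text"][-1]["content"])
```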


### Training results - Multilingual Task Performance Comparison

| Language   | HellaSwag | ARC-c  | ARC-e  | MMLU   | BoolQ  |
|------------|-----------|--------|--------|--------|--------|
| Hindi      | 47.85     | 36.68  | 52.14  | 46.73  | 57.61  |
| Tamil      | 49.45     | 38.65  | 53.45  | 44.71  | 45.87  |
| Telugu     | 50.84     | 37.96  | 53.36  | 46.85  | 51.89  |
| Kannada    | 52.16     | 38.31  | 53.11  | 46.38  | 52.32  |
| Malayalam  | 46.32     | 29.60  | 40.86  | 43.63  | 46.69  |
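
The card does not state which evaluation harness produced these numbers. For readers who want to see how multiple-choice benchmarks such as ARC are typically scored with a causal LM, the sketch below compares the log-likelihood the model assigns to each answer option; the question and options are hypothetical placeholders, and this is not the evaluation code behind the table above.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "budecosystem/hex-1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")
model.eval()

def option_logprob(question: str, option: str) -> float:
    """Total log-probability the model assigns to `option` given `question`.
    Assumes the question tokens form a prefix of the question+option tokens,
    which holds for typical tokenizations but is an approximation."""
    prompt_len = tokenizer(question, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(question + " " + option, return_tensors="pt").input_ids.to(model.device)
    with torch.no_grad():
        logits = model(full_ids).logits
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)   # predictions for tokens 1..N-1
    targets = full_ids[0, 1:]
    token_logp = log_probs.gather(1, targets.unsqueeze(-1)).squeeze(-1)
    return token_logp[prompt_len - 1:].sum().item()         # score only the option tokens

# Hypothetical Hindi multiple-choice item (placeholder, not from the benchmark data).
question = "आकाश किस रंग का होता है?"   # "What colour is the sky?"
options = ["नीला", "लाल", "हरा", "काला"]  # blue, red, green, black
scores = [option_logprob(question, opt) for opt in options]
print("predicted answer:", options[scores.index(max(scores))])
```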

### Training hyperparameters


The following hyperparameters were used during training:
- learning_rate: 1e-05
- seed: 42
- distributed_type: multi-GPU
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
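
As a reference point, here is a hedged sketch of how these reported values map onto `transformers.TrainingArguments` in a standard Trainer setup; the output path, batch size, and precision flag are illustrative placeholders, not settings reported for Hex1.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="hex-1-finetune",       # placeholder path, not reported
    learning_rate=1e-5,                # reported
    seed=42,                           # reported
    lr_scheduler_type="cosine",        # reported
    warmup_ratio=0.1,                  # reported
    num_train_epochs=3.0,              # reported
    per_device_train_batch_size=4,     # not reported; placeholder
    bf16=True,                         # assumption for multi-GPU training
)
```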



### Acknowledgements

Our heartfelt thanks go to the open-source community and the trailblazers in AI research whose work has paved the way for these innovations. A special shout-out to the Qwen3 team for open-sourcing Qwen3-4B-Base, the base model on which Hex1 is built.