dittops committed · Commit 8062621 · verified · 1 Parent(s): 62de7e6

Update README.md

Files changed (1): README.md (+48 −0)
README.md CHANGED
@@ -43,6 +43,49 @@ When benchmarked against leading models like Gemma-2B, LLaMA-3.2-3B, and Sarvam-
  </div>


+ ## Quickstart
+
+ The following code snippet illustrates how to use the model to generate content from a given input.
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_name = "budecosystem/hex-1"
+
+ # load the tokenizer and the model
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_name,
+     torch_dtype="auto",
+     device_map="auto"
+ )
+
+ # prepare the model input
+ prompt = "பொங்கல் என்றால் என்ன?"  # Tamil: "What is Pongal?"
+ messages = [
+     {"role": "user", "content": prompt}
+ ]
+ text = tokenizer.apply_chat_template(
+     messages,
+     tokenize=False,
+     add_generation_prompt=True,
+ )
+ model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
+
+ # conduct the text completion
+ generated_ids = model.generate(
+     **model_inputs,
+     max_new_tokens=32768
+ )
+ # keep only the newly generated tokens, dropping the prompt
+ output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
+
+ content = tokenizer.decode(output_ids, skip_special_tokens=True).strip("\n")
+
+ print("content:", content)
+ ```
+
+
  ### Training results - Multilingual Task Performance Comparison

  | Language | Hellaswag | ARC-c | ARC-e | MMLU | BoolQ |
 
@@ -65,3 +108,8 @@ The following hyperparameters were used during training:
  - num_epochs: 3.0


+
+ ### Acknowledgements
+
+ Our heartfelt thanks go to the open-source community and the trailblazers in AI research whose work has paved the way for these innovations.
+
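
As a side note on the Quickstart snippet added above: the same flow can also be driven through the `transformers` text-generation `pipeline`, which applies the chat template and decodes the reply internally. The sketch below is a minimal illustration, not part of the committed README; the model id and prompt are taken from the snippet, and `max_new_tokens=256` is an arbitrary cap chosen here.

```python
# Minimal sketch (not from the committed README): the text-generation
# pipeline handles chat templating and decoding on our behalf.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="budecosystem/hex-1",
    torch_dtype="auto",
    device_map="auto",
)

messages = [{"role": "user", "content": "பொங்கல் என்றால் என்ன?"}]  # Tamil: "What is Pongal?"

# For chat-style input, the pipeline returns the conversation with the
# assistant's reply appended as the final message.
result = generator(messages, max_new_tokens=256)  # 256 is an illustrative cap
print(result[0]["generated_text"][-1]["content"])
```

Note that `device_map="auto"` (used here and in the snippet above) requires the `accelerate` package to be installed alongside `transformers`.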