prithivMLmods committed 31a5f37 · verified · 1 parent: 9ffe4b1

Update README.md

Files changed (1): README.md (+29 -0)
README.md CHANGED
_/ |_ _______ |__|_____ ____ ____ __ __ | | __ __ _____
|__| |__| |__|(____ /|___| /\___ / |____/ |____/|____/ |__|_| /
\/ \//_____/ \/
</pre>
# **Triangulum 10B: Multilingual Large Language Models (LLMs)**

Triangulum 10B is a collection of pretrained and instruction-tuned generative models designed for multilingual applications. These models are trained on synthetic datasets based on long chains of thought, enabling them to perform complex reasoning tasks effectively.
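A minimal inference sketch with the Hugging Face `transformers` library is shown below. The repo id `prithivMLmods/Triangulum-10B` and the presence of a bundled chat template are assumptions based on this card's namespace, not confirmed details.

```python
# Minimal inference sketch; the repo id below is an assumption, not a confirmed path.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "prithivMLmods/Triangulum-10B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # reduced precision so a 10B model fits on one GPU
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful multilingual assistant."},
    {"role": "user", "content": "Explique brièvement ce qu'est un transformeur."},
]
# Assumes the instruct checkpoint ships a chat template.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```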
# **Key Features**

- **Foundation Model**: Built on LLaMA's autoregressive language model, leveraging an optimized transformer architecture for enhanced performance.
- **Instruction Tuning**: Includes supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF) to align model outputs with human preferences for helpfulness and safety.
- **Multilingual Support**: Designed to handle multiple languages, ensuring broad applicability across diverse linguistic contexts; a short multilingual prompting sketch follows this list.
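As a rough illustration of the multilingual claim, the snippet below reuses `model` and `tokenizer` from the loading sketch above and sends a request in three languages; the prompts are illustrative only.

```python
# Continues from the loading sketch above (same assumed `model` and `tokenizer`).
prompts = {
    "English": "Summarize the benefits of renewable energy in one sentence.",
    "Spanish": "Resume en una frase los beneficios de la energía renovable.",
    "Hindi": "नवीकरणीय ऊर्जा के लाभ एक वाक्य में बताइए।",
}
for lang, text in prompts.items():
    chat = [{"role": "user", "content": text}]
    ids = tokenizer.apply_chat_template(
        chat, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    out = model.generate(ids, max_new_tokens=80)
    reply = tokenizer.decode(out[0][ids.shape[-1]:], skip_special_tokens=True)
    print(f"[{lang}] {reply}")
```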
# **Training Approach**

1. **Synthetic Datasets**: Utilizes long chain-of-thought synthetic data to enhance reasoning capabilities; one possible record format is sketched after this list.
2. **Supervised Fine-Tuning (SFT)**: Adapts the model to specific tasks through curated datasets.
3. **Reinforcement Learning from Human Feedback (RLHF)**: Ensures the model adheres to human values and safety guidelines through iterative training.
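The card does not specify the synthetic data schema, so the sketch below shows one plausible way a long chain-of-thought example could be packed into a chat-style SFT record. The `<think>` wrapper and field names are hypothetical.

```python
# One plausible chain-of-thought SFT record format (hypothetical schema,
# not the actual Triangulum training data layout).
def make_cot_record(question: str, reasoning_steps: list[str], answer: str) -> dict:
    """Pack a synthetic long chain-of-thought example into a chat-style record."""
    thought = "\n".join(f"Step {i + 1}: {s}" for i, s in enumerate(reasoning_steps))
    return {
        "messages": [
            {"role": "user", "content": question},
            # Separating the reasoning trace from the final answer is illustrative.
            {"role": "assistant", "content": f"<think>\n{thought}\n</think>\n{answer}"},
        ]
    }

record = make_cot_record(
    "If a train travels 120 km in 1.5 hours, what is its average speed?",
    ["Average speed is distance divided by time.", "120 km / 1.5 h = 80 km/h."],
    "The average speed is 80 km/h.",
)
print(record["messages"][1]["content"])
```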
# **Use Cases**

- Multilingual content generation
- Question answering and dialogue systems
- Text summarization and analysis
- Translation and localization tasks (see the prompt-style example after this list)
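For translation-style use, a hedged example with the `transformers` text-generation pipeline follows; chat-format pipeline inputs require a recent `transformers` release, and the repo id is again an assumption.

```python
# Prompt-style translation via the text-generation pipeline (sketch).
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="prithivMLmods/Triangulum-10B",  # assumed repo id
    device_map="auto",
    torch_dtype="auto",
)
messages = [
    {"role": "user", "content": "Translate into German: 'The weather is lovely today.'"}
]
result = generator(messages, max_new_tokens=64)
print(result[0]["generated_text"][-1]["content"])  # the assistant's reply
```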
# **Technical Details**

Triangulum 10B employs a state-of-the-art autoregressive architecture inspired by LLaMA. The optimized transformer framework ensures both efficiency and scalability, making it suitable for a variety of use cases.
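To make the "autoregressive" point concrete, here is a toy greedy-decoding loop, independent of Triangulum's actual inference stack: each step predicts one token from everything generated so far and feeds it back in.

```python
# Toy greedy decoder illustrating autoregressive generation.
import torch

@torch.no_grad()
def greedy_decode(model, tokenizer, prompt: str, max_new_tokens: int = 64) -> str:
    ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
    for _ in range(max_new_tokens):
        logits = model(input_ids=ids).logits                     # [1, seq_len, vocab_size]
        next_id = logits[:, -1, :].argmax(dim=-1, keepdim=True)  # most likely next token
        ids = torch.cat([ids, next_id], dim=-1)                  # append and feed back in
        if next_id.item() == tokenizer.eos_token_id:
            break
    return tokenizer.decode(ids[0], skip_special_tokens=True)
```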