nehulagrawal committed · verified
Commit e84d3d1 · Parent(s): f50b1b3

Update README.md

Files changed (1): README.md +131 -3
---
license: apache-2.0
language:
- en
library_name: transformers
tags:
- Reasoning
- React
- COT
- MachineLearning
- DeepLearning
- FineTuning
- NLP
- AIResearch
---

# Think-and-Code-React

## Table of Contents
1. [Introduction](#introduction)
2. [Problem Statement](#problem-statement)
3. [Solution](#solution)
4. [How It Works](#how-it-works)
5. [How to Use This Model](#how-to-use-this-model)
6. [Future Developments](#future-developments)
7. [License](#license)
8. [Model Card Contact](#model-card-contact)

## Introduction

This is a fine-tuned Qwen model designed to provide frontend development solutions with enhanced reasoning capabilities for ReactJS. It reasons through the task first, then writes the code, and finishes by suggesting best practices alongside the answer.

## Problem Statement

Coding is a challenging task for small models: they are generally not capable of writing code with high accuracy and sound reasoning. React is a widely used JavaScript library, yet we find that small LLMs are rarely specialized for this kind of programming.

## Solution

We train the LLM on a React-specific dataset and enable reasoning. This gives us a cold-start, React-focused LLM that understands many React concepts. The model:

1. Understands the user's query
2. Evaluates everything inside a `<think>` tag
3. Provides the answer inside an `<answer>` tag
4. Additionally provides best practices inside a `<verifier_answer>` tag
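The tagged sections can later be pulled apart programmatically. A minimal sketch (the sample response string below is illustrative, not real model output):

```python
import re

def extract_sections(response: str) -> dict:
    """Extract the <think>, <answer>, and <verifier_answer> sections
    from a model response. Missing sections map to None."""
    sections = {}
    for tag in ("think", "answer", "verifier_answer"):
        match = re.search(rf"<{tag}>(.*?)</{tag}>", response, re.DOTALL)
        sections[tag] = match.group(1).strip() if match else None
    return sections

# Illustrative response in the tagged format described above
sample = (
    "<think>The user wants a counter component.</think>"
    "<answer>const Counter = () => ...;</answer>"
    "<verifier_answer>Prefer useState over class state.</verifier_answer>"
)
parts = extract_sections(sample)
print(parts["answer"])  # const Counter = () => ...;
```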

## How It Works

1. **Data Collection**: The model is trained on outputs from Microsoft's Phi-4 model covering thousands of React-specific scenarios. This gives us a cold start with good reasoning capabilities.

2. **Feature Extraction**: We then upscale it using reinforcement learning (RL) to give the model a higher level of accuracy and better reasoning output.

3. **Machine Learning**: A machine learning algorithm is employed to learn high-quality, React-specific code, and the approach can be extended to other frameworks.

## How to Use This Model

### Prerequisites

- Python 3.7 or higher
- Required libraries (install via pip):
```bash
pip install torch transformers
```

### Installation

1. Clone this repository:
```bash
git clone https://huggingface.co/nehulagrawal/Think-and-Code-React
cd Think-and-Code-React
```

2. The pre-trained model and tokenizer files are downloaded with the repository; alternatively, load them directly from the Hub with `from_pretrained`.

### Usage

1. Import the necessary libraries:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
```

2. Set up the model and tokenizer:

```python
model_path = "./Path-to-llm-folder"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)

# Run on GPU when available
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
```

3. Set up the generation helper:

```python
def generate_text(prompt, max_length=2000):
    inputs = tokenizer(prompt, return_tensors="pt").to(device)
    output = model.generate(
        **inputs,
        max_new_tokens=max_length,
        do_sample=True,
        temperature=0.7,
    )
    return tokenizer.decode(output[0], skip_special_tokens=True)
```

4. Use the LLM:

```python
prompt = "Write code in React for calling an API on the server at https://example.com/test"
generated_text = generate_text(prompt)

print(generated_text)
```

## Future Developments

This is a cold-start LLM, and its capabilities can be further enhanced using RL so that it performs even better.

## Model Card Contact

For inquiries and contributions, please contact us at [email protected].

```bibtex
@misc{thinkandcodereact2025,
  author = {Nehul Agrawal and Priyal Mata and Ayush Panday},
  title  = {Think and Code in React},
  year   = {2025}
}
```