---
license: apache-2.0
language:
- en
library_name: transformers
tags:
- Reasoning
- React
- COT
- MachineLearning
- DeepLearning
- FineTuning
- NLP
- AIResearch
---
# Think-and-Code-React
## Table of Contents
1. [Introduction](#introduction)
2. [Problem Statement](#problem-statement)
3. [Solution](#solution)
4. [How It Works](#how-it-works)
5. [How to Use This Model](#how-to-use-this-model)
6. [Future Developments](#future-developments)
7. [License](#license)
8. [Model Card Contact](#model-card-contact)
## Introduction
This is a fine-tuned Qwen model designed to provide frontend development solutions with enhanced reasoning capabilities for ReactJS. It reasons through the task before writing code, then offers best practices after answering.
## Problem Statement
Coding is a challenging task for small models: they are not capable enough to write code with high accuracy and reasoning. React is a widely used JavaScript library, yet we often find that small LLMs are not specialized enough for programming with it.
## Solution
We train the LLM on a React-specific dataset and enable a reasoning step. This gives us a cold start with a React-focused LLM that understands many React concepts. The model responds in a structured format (illustrated after this list):
1. Understands the user's query
2. Evaluates everything inside a `<think>` tag
3. Provides the answer inside an `<answer>` tag
4. Additionally provides best practices inside a `<verifier_answer>` tag
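For illustration only, a response following this scheme might look like the sketch below; the exact tag contents and formatting depend on the model's training data:
```text
<think>
The user wants a counter component; useState is the idiomatic hook here...
</think>
<answer>
function Counter() { /* React code goes here */ }
</answer>
<verifier_answer>
Best practices: keep state minimal, name event handlers handleX, ...
</verifier_answer>
```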
## How It Works
1. **Data Collection**: The model is trained on outputs from Microsoft's Phi-4 model covering thousands of React-specific scenarios. This gives us a cold start with good reasoning capabilities.
2. **Feature Extraction**: We upscale the model using RL to reach a higher level of accuracy and better reasoning output.
3. **Machine Learning**: A machine learning algorithm is employed to learn high-quality React-specific code, and the approach can be expanded to all frameworks.
## How to Use This Model
### Prerequisites
- Python 3.7 or higher
- Required libraries (install via pip):
```bash
pip install torch transformers
```
### Installation
1. Clone this repository:
```bash
git clone https://huggingface.co/nehulagrawal/Think-and-Code-React
cd Think-and-Code-React
```
2. The pre-trained model weights and tokenizer files are downloaded as part of the clone (via Git LFS), so no separate download step is needed.
### Usage
1. Import the necessary libraries:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
```
2. Set up the model and tokenizer:
```python
model_path = "./Path-to-llm-folder"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
```
3. Set up response generation:
```python
def generate_text(prompt, max_new_tokens=2000):
    # Tokenize the prompt and move the tensors to the same device as the model.
    inputs = tokenizer(prompt, return_tensors="pt").to(device)
    output = model.generate(
        **inputs,
        do_sample=True,
        temperature=0.7,
        max_new_tokens=max_new_tokens,  # cap the length of the generated reply
    )
    return tokenizer.decode(output[0], skip_special_tokens=True)
```
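Qwen-derived models are usually chat-tuned, so results may improve if the prompt is wrapped with the tokenizer's chat template. A hedged sketch, assuming the bundled tokenizer ships a chat template:
```python
def generate_chat(prompt, max_new_tokens=2000):
    # Format the prompt as a single user turn using the tokenizer's template.
    messages = [{"role": "user", "content": prompt}]
    text = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(text, return_tensors="pt").to(device)
    output = model.generate(
        **inputs, do_sample=True, temperature=0.7, max_new_tokens=max_new_tokens
    )
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
```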
4. Use the LLM:
```python
prompt = "Write a code in react for calling api to server at https://example.com/test"
generated_text = generate_text(prompt)
print(generated_text)
```
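Since the model answers inside the tags described in the Solution section, it can help to parse them out of the raw generation. A minimal sketch, assuming the tags appear verbatim in the output:
```python
import re

def extract_tag(text, tag):
    # Return the content of the first <tag>...</tag> block, or None if absent.
    match = re.search(rf"<{tag}>(.*?)</{tag}>", text, re.DOTALL)
    return match.group(1).strip() if match else None

answer = extract_tag(generated_text, "answer")
best_practices = extract_tag(generated_text, "verifier_answer")
```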
## Future Developments
This is a cold-start LLM; its capabilities can be enhanced further with RL so that it performs even better.
## License
This model is released under the Apache 2.0 license (see the `license` field in the metadata above).
## Model Card Contact
For inquiries and contributions, please contact us at [email protected].
```bibtex
@misc{thinkandcodereact2025,
  author = {Nehul Agrawal and Priyal Mata and Ayush Panday},
  title  = {Think and Code in React},
  year   = {2025}
}
```