---
base_model: Qwen/Qwen2.5-Coder-14B-Instruct
tags:
  - text-generation-inference
  - transformers
  - qwen2
  - trl
license: apache-2.0
language:
  - en
datasets:
  - Tesslate/Tessa-T1-Dataset
---

# 🚀 Model Card for Tess-T1

## 🌟 Model Overview

*(Screenshot: landing page)*

Tess-T1 is a transformer-based React reasoning model, fine-tuned from the Qwen2.5-Coder-14B-Instruct base model. Designed specifically for React frontend development, Tess-T1 reasons through a task before answering and generates well-structured, semantic React components. It also integrates into agent systems, making it a useful tool for automating web interface development and frontend code intelligence.


## 🎯 Model Highlights

- ✅ **React-specific reasoning:** accurately generates functional and semantic React components.
- ✅ **Agent integration:** seamlessly fits into AI-driven coding agents and autonomous frontend systems.
- ✅ **Context-aware generation:** effectively understands and utilizes UI context to provide relevant code solutions.

## 📸 Example Outputs

See examples demonstrating the reasoning and component-creation capabilities of Tess-T1:

*(Screenshot: result for "Make a functioning AI training waitlist")*

*(Screenshot: result for the prompt "add in a calendar")*

πŸ› οΈ Use Cases

βœ… Recommended Uses

  • Automatic Component Generation: Quickly produce React components from textual prompts.
  • Agent-based Web Development: Integrate into automated coding systems for faster frontend workflows.
  • Frontend Refactoring: Automate the optimization and semantic enhancement of React code.

### ⚠️ Limitations

- **Focused on React:** limited use outside React.js frameworks.
- **Complex state management:** may require manual adjustments for highly dynamic state-management scenarios.

## 📦 How to Use

### Inference Example

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "smirki/Tess-T1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).to("cuda")

# ChatML-style prompt; the trailing `think` turn opens the assistant's
# reasoning, letting the model think before emitting the final component.
prompt = """<|im_start|>user
Create a React component for a user profile card.<|im_end|>
<|im_start|>assistant
<|im_start|>think
"""

inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=1500, do_sample=True, temperature=0.7)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
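The raw prompt string above can also be built programmatically. A minimal sketch of a helper that reproduces the same ChatML layout, including the opening `think` turn (`build_prompt` is a hypothetical name, not part of the model's API):

```python
def build_prompt(user_message: str) -> str:
    """Wrap a user message in the ChatML format used above, ending with an
    open `think` turn so the model begins by generating its reasoning."""
    return (
        "<|im_start|>user\n"
        f"{user_message}<|im_end|>\n"
        "<|im_start|>assistant\n"
        "<|im_start|>think\n"
    )

prompt = build_prompt("Create a React component for a user profile card.")
```

The resulting string can be passed to the tokenizer exactly as in the example above.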

## 📊 Performance and Evaluation

**Strengths:**

- Strong semantic React component generation.
- Excellent integration capabilities with agent-based systems.

**Weaknesses:**

- Complex JavaScript logic may require manual post-processing.

## 💻 Technical Specifications

- **Architecture:** Transformer-based LLM
- **Base model:** Qwen2.5-Coder-14B-Instruct
- **Precision:** bf16 mixed precision, quantized to q8
- **Hardware requirements:** 12 GB VRAM recommended
- **Software dependencies:**
  - Hugging Face Transformers
  - PyTorch
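Given the q8 quantization and 12 GB VRAM recommendation above, one way to load the model within a constrained memory budget is 8-bit loading via Transformers. A minimal configuration sketch, assuming the `bitsandbytes` package is installed (it is not listed in the dependencies above):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "smirki/Tess-T1"

# 8-bit weight quantization halves weight memory relative to bf16.
quant_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=quant_config,
    device_map="auto",  # place layers across available GPU/CPU memory
)
```

With `device_map="auto"`, any layers that do not fit in VRAM are offloaded to CPU memory, at some cost in generation speed.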

## 📖 Citation

```bibtex
@misc{smirki_Tess-T1,
  title={Tess-T1: React-Focused Reasoning Model for Component Generation},
  author={tesslate},
  year={2025},
  publisher={Hugging Face},
  url={https://huggingface.co/tesslate/Tess-T1}
}
```

## 🤝 Contact & Community

- **Creator:** smirki
- **Repository & demo:** Coming soon!