ViCoder-html-32B-preview
A powerful HTML/CSS/JS sketching model powered by Qwen2.5-Coder-32B-Instruct
Developed by Vichar AI | Hugging Face Profile
Licensed under Apache 2.0
What is ViCoder-html-32B-preview?
ViCoder-html-32B-preview is a preview model in the ViCoder series from Vichar AI, a line of models specialized in code generation. This model focuses specifically on sketching single-page websites, such as landing pages and dashboards, using:
- HTML for semantic structure
- Tailwind CSS for modern, utility-first styling
- JavaScript for interactivity and basic dynamic behavior
This model is ideal for:
- Web Developers: Quickly scaffolding dashboards or page layouts.
- Frontend Engineers: Prototyping UIs and exploring design variations.
- Designers: Turning textual mockups into initial code sketches.
- Educators & Students: Learning and experimenting with HTML, Tailwind CSS, and JavaScript in a practical context.
Note: This is a preview version. It demonstrates core capabilities but is still under active development. A more refined and robust production release is planned. Stay updated via vichar.io or follow VicharAI on Hugging Face!
Model Details
| Property | Value |
|---|---|
| Model Type | Code Generation (Instruction-tuned Language Model) |
| Base Model | Qwen/Qwen2.5-Coder-32B-Instruct |
| Developed by | Vichar AI (HF Profile) |
| Languages | Primarily HTML, Tailwind CSS, JavaScript. Understands English instructions. |
| Training Data | Proprietary curated dataset focusing on high-quality web components and pages. |
| License | Apache 2.0 |
| Library | Transformers |
| Contact | Visit vichar.io or use HF Discussions |
GGUF Quantized Versions
Quantized versions of ViCoder-html-32B-preview in GGUF format are available for efficient local inference using llama.cpp, LM Studio, or Ollama.
You can find them here:
These quantized variants (Q3_K_M, Q4_K_M, Q6_K, Q8_0) are useful for running the model on lower-memory hardware or for embedding in desktop/web applications.
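As a rough guide to which quantization level fits your hardware, the file size scales with bits per weight. The sketch below estimates approximate download sizes for each listed variant; the bits-per-weight figures are approximate llama.cpp values and the 32B parameter count is taken at face value, so treat the results as ballpark estimates only.

```python
# Rough GGUF size estimate: parameters * bits-per-weight / 8 bytes.
# Bits-per-weight values are approximate llama.cpp figures (assumption).
PARAMS = 32e9

BITS_PER_WEIGHT = {
    "Q3_K_M": 3.9,
    "Q4_K_M": 4.8,
    "Q6_K": 6.6,
    "Q8_0": 8.5,
}

def approx_size_gb(quant: str, params: float = PARAMS) -> float:
    """Approximate GGUF file size in gigabytes for a quantization type."""
    return params * BITS_PER_WEIGHT[quant] / 8 / 1e9

for quant in BITS_PER_WEIGHT:
    print(f"{quant}: ~{approx_size_gb(quant):.0f} GB")
```

Add a few gigabytes of headroom on top of the file size for the KV cache and runtime overhead when choosing a variant.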
Example Usage
Use the `transformers` library for easy text generation. Ensure you have `transformers`, `torch`, and `accelerate` installed.
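If the dependencies are not already installed, they can be added with pip (exact version pinning is left to you):

```shell
pip install transformers torch accelerate
```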
from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer
import torch

# Define the model ID
model_id = "VicharAI/ViCoder-html-32B-preview"

# Load tokenizer and model
# Use bfloat16 for faster inference if your GPU supports it
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # Or torch.float16 if bfloat16 is not supported
    device_map="auto",           # Automatically distribute across available GPUs/CPU
)

messages = [
    {"role": "user", "content": "A modern, sleek landing page for a company focusing on open-source LLM solutions"},
]

# Build the prompt using the model's chat template
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Stream generated tokens to stdout as they are produced
text_streamer = TextStreamer(tokenizer, skip_prompt=True)
_ = model.generate(
    input_ids=inputs,
    streamer=text_streamer,
    max_new_tokens=16000,
    use_cache=True,
    temperature=0.7,
    min_p=0.1,
    repetition_penalty=1.1,
)
Output Sample
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>Our Love Story - Surprise Website</title>
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<!-- Tailwind CSS CDN -->
<script src="https://cdn.tailwindcss.com"></script>
<style>
/* Custom animation classes */...
(Note: The model aims to generate complete HTML structures with Tailwind classes. Review and adapt generated code as needed.)
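Raw model output can include conversational text or markdown fences around the HTML document itself. The helper below is a hypothetical post-processing sketch (not part of the model's tooling, and the marker-based heuristics are assumptions) for pulling out just the HTML before saving it to a file:

```python
def extract_html(generated: str) -> str:
    """Extract the HTML document from raw model output.

    Trims anything before the <!DOCTYPE html> (or <html) marker and
    strips a trailing markdown code fence if one is present.
    """
    start = generated.find("<!DOCTYPE html>")
    if start == -1:
        start = generated.find("<html")
    html = generated[start:] if start != -1 else generated
    # Drop a trailing ``` fence left over from markdown-style output
    fence = html.rfind("```")
    if fence != -1:
        html = html[:fence]
    return html.strip()

# Example with a mock model response
raw = (
    "Here is your page:\n"
    "```html\n"
    "<!DOCTYPE html>\n<html lang=\"en\"><body>Hi</body></html>\n"
    "```"
)
page = extract_html(raw)
print(page.startswith("<!DOCTYPE html>"))  # True
```

The cleaned string can then be written straight to an `.html` file and opened in a browser.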
Evaluation & Limitations
As a preview release, this model has undergone initial internal testing focused on:
- Code Correctness: Validity of generated HTML, Tailwind CSS classes, and basic JavaScript snippets.
- Tailwind CSS Usage: Adherence to Tailwind's utility-first principles and common patterns.
- Component Structure: Logical organization of HTML elements for typical web components.
- Instruction Following: Ability to understand and implement requirements from the prompt.
Current Limitations:
- No Formal Benchmarks: Not yet evaluated on standard code generation benchmarks (e.g., HumanEval-X, MBPP).
- Complex Logic: May struggle with complex JavaScript logic, state management, or intricate CSS beyond Tailwind utilities.
- Hallucination Risk: Like all LLMs, it can sometimes generate incorrect, incomplete, or non-optimal code. Always review the output.
- Preview Status: Not recommended for critical production use without thorough validation.
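Because generated markup can be incomplete or malformed, a quick automated sanity check before opening files in a browser can catch obvious problems. The sketch below is an illustration (not an official validation tool) using Python's standard `html.parser` to flag unbalanced tags:

```python
from html.parser import HTMLParser

# Tags that are legitimately left unclosed in HTML
VOID_TAGS = {"area", "base", "br", "col", "embed", "hr", "img",
             "input", "link", "meta", "source", "track", "wbr"}

class TagBalanceChecker(HTMLParser):
    """Tracks open tags and records any that are never properly closed."""

    def __init__(self):
        super().__init__()
        self.stack = []
        self.unclosed = []

    def handle_starttag(self, tag, attrs):
        if tag not in VOID_TAGS:
            self.stack.append(tag)

    def handle_endtag(self, tag):
        if tag in self.stack:
            # Pop back to the matching open tag; anything popped on the
            # way was opened inside it but never closed.
            while self.stack:
                top = self.stack.pop()
                if top == tag:
                    break
                self.unclosed.append(top)

def unclosed_tags(html: str) -> list:
    checker = TagBalanceChecker()
    checker.feed(html)
    checker.close()
    return checker.unclosed + checker.stack

print(unclosed_tags("<html><body><p>ok</p></body></html>"))   # []
print(unclosed_tags("<html><body><div>oops</body></html>"))   # ['div']
```

An empty result does not guarantee valid HTML, but a non-empty one is a strong signal that the output needs manual review.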
Roadmap
The ViCoder series is an ongoing project at Vichar AI. Our current roadmap includes:
- ViCoder-html-32B-preview: Initial public preview release (this model).
- ViCoder-html-32B (v1.0): Planned production-ready release with improved training data, fine-tuning, and evaluation.
- ViCoder-js-32B: Future model focusing specifically on advanced JavaScript generation (frameworks, logic).
- ViCoder-python-32B: Potential companion model for Python backend code generation.
- Benchmarking & Evaluation: Formal evaluation on relevant code generation benchmarks.
Follow VicharAI on Hugging Face or check the Vichar AI website for announcements!
License
This model and its code are licensed under the Apache License 2.0. You can find the full license text here.
Citation
If you use ViCoder-html-32B-preview in your projects, publications, or research, please cite it:
@misc{vicharai_vicoder_html_32b_preview_2025,
  title = {ViCoder-html-32B-preview: A Preview Model for HTML/Tailwind CSS/JavaScript Sketching},
  author = {Vichar AI},
  year = {2025},
  publisher = {Hugging Face},
  url = {https://huggingface.co/VicharAI/ViCoder-html-32B-preview}
}
Get in Touch
We welcome feedback, questions, and collaboration ideas!
- Hugging Face: Open an issue or start a discussion on the model page's Community tab.
- Website: Visit us at https://vichar.io for more information about Vichar AI and the ViCoder project.
- Contact: Find direct contact methods on the Vichar AI website.
Acknowledgments
This project builds upon the incredible work of others:
- SprykAI for their support during model experimentation phases.
- The Qwen Team at Alibaba Cloud for developing the foundational Qwen2.5-Coder-32B-Instruct model.
- The Hugging Face Team for their platform and libraries (Transformers, Accelerate, TRL).
- The broader open-source AI community for continuous innovation and shared knowledge.
- Development efforts by the team at Vichar AI.
This preview is just the start! Explore, build, and stay tuned for the full ViCoder suite from Vichar AI!