# Fine-tuned LLaMA 3.2 1B Instruct (Merged) – HTML Chunk Generator
This model merges a LoRA adapter into LLaMA 3.2 1B Instruct for generating structured HTML in chunked formats.
## Usage (vLLM)

```bash
export HF_TOKEN=your_token
vllm serve jasongraydon01/hawkpartners-survey-llama-3.2-1b-instruct --max-model-len 32768
```
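Once the server is running, it exposes an OpenAI-compatible API, by default at `http://localhost:8000/v1`. A minimal client sketch, assuming the default port and the `openai` Python package:

```python
from openai import OpenAI

# vLLM does not check the API key by default; any placeholder works.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="jasongraydon01/hawkpartners-survey-llama-3.2-1b-instruct",
    messages=[{"role": "user", "content": "Generate a full HTML article on the topic of..."}],
    max_tokens=8192,
)
print(response.choices[0].message.content)
```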
## Usage (Transformers)
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

repo = "jasongraydon01/hawkpartners-survey-llama-3.2-1b-instruct"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="auto")

messages = [{"role": "user", "content": "Generate a full HTML article on the topic of..."}]

# return_dict=True yields input_ids plus attention_mask, ready to unpack into generate().
inputs = tokenizer.apply_chat_template(
    messages, return_tensors="pt", add_generation_prompt=True, return_dict=True
)
outputs = model.generate(**inputs, max_new_tokens=8192)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
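For long HTML outputs it can help to stream tokens to stdout as they are generated. A short sketch using `TextStreamer` from transformers, reusing `tokenizer`, `model`, and `inputs` from the snippet above:

```python
from transformers import TextStreamer

# skip_prompt=True prints only the newly generated tokens, not the input prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(**inputs, max_new_tokens=8192, streamer=streamer)
```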
## Model tree

- Base model: [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct)
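Because the LoRA adapter is already merged into this checkpoint, no `peft` dependency is needed at inference time. For reference, a sketch of how such a merge is typically produced with `peft`; the adapter repo name below is a hypothetical placeholder, not the actual training artifact:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B-Instruct", torch_dtype="auto")
# Hypothetical adapter path; substitute the real LoRA adapter repo.
model = PeftModel.from_pretrained(base, "your-username/your-lora-adapter")
merged = model.merge_and_unload()  # fold the LoRA deltas into the base weights
merged.save_pretrained("merged-model")
AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B-Instruct").save_pretrained("merged-model")
```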