# Model Card for llama-3.2-3b-fine-tune

## Model Details

### Model Description

A fine-tuned Llama 3.2 3B checkpoint, loaded from a local `models/llama-3.2-3b-fine-tune` directory. The example below loads the checkpoint into a Hugging Face `transformers` text-generation pipeline, formats a chat prompt with the model's chat template, and samples a response.
```python
import os

import torch
from transformers import pipeline

# Use the GPU if one is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)

# Path to the fine-tuned checkpoint, relative to the current working directory.
base_dir = os.getcwd()
model_id = os.path.join(base_dir, "models", "llama-3.2-3b-fine-tune")
print(model_id)

# Load the checkpoint into a text-generation pipeline in bfloat16.
pipe = pipeline("text-generation", model=model_id, torch_dtype=torch.bfloat16, device_map=device)

messages = [
    {"role": "user", "content": "How can I protect myself from HIV and STIs during sex?"},
]

# Render the conversation with the model's chat template and append the assistant turn marker.
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

outputs = pipe(prompt, max_new_tokens=500, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)

# With a string prompt, "generated_text" contains the prompt followed by the completion.
answer = outputs[0]["generated_text"]
print(answer)
```
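Because the pipeline receives a plain string prompt, `generated_text` echoes the prompt before the model's reply. If only the newly generated tokens are wanted, a minimal variant is to set `return_full_text=False` (a standard text-generation pipeline flag in recent `transformers` releases):

```python
# Sketch: return only the completion, not the echoed prompt.
# Assumes the pipeline supports return_full_text (present in recent transformers).
outputs = pipe(
    prompt,
    max_new_tokens=500,
    do_sample=True,
    temperature=0.7,
    top_k=50,
    top_p=0.95,
    return_full_text=False,
)
answer = outputs[0]["generated_text"]  # assistant reply only
print(answer)
```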