
new2

This is a LoRA adapter fine-tuned from the base model NousResearch/DeepHermes-3-Llama-3-3B-Preview.

Model Details

  • Base Model: NousResearch/DeepHermes-3-Llama-3-3B-Preview
  • Adapter Type: LoRA
  • Task: JEE Mathematics 3D geometry problem solving

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

# Load base model and tokenizer
base_model = "NousResearch/DeepHermes-3-Llama-3-3B-Preview"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.bfloat16)

# Load the LoRA adapter
adapter_model = "AthenaAgent42/new2"
model = PeftModel.from_pretrained(model, adapter_model)

# Example prompt
prompt = """
<Your prompt here>
"""

# Generate response
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
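Hermes-family models from NousResearch typically use the ChatML prompt format, so plain free-form prompts may underperform. The sketch below (an assumption for this adapter, since the card does not specify a prompt format) shows how a ChatML prompt can be built by hand; the `build_chatml_prompt` helper is hypothetical, not part of any library:

```python
def build_chatml_prompt(system: str, user: str) -> str:
    # ChatML layout used by Hermes-family models (assumed here):
    # each turn is wrapped in <|im_start|>{role} ... <|im_end|>,
    # and the prompt ends with an open assistant turn for generation.
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are a helpful mathematics tutor.",
    "Find the distance from the point (1, 2, 3) to the plane x + y + z = 6.",
)
print(prompt)
```

In practice, `tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)` produces the correct format directly from the tokenizer's bundled chat template and is the safer option when one is available.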