archit11 committed · verified
Commit 5d2139c · Parent(s): 80d6fdc

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +38 -0
README.md ADDED
@@ -0,0 +1,38 @@
+ # new2
+
+ This is a LoRA adapter fine-tuned on the base model [NousResearch/DeepHermes-3-Llama-3-3B-Preview](https://huggingface.co/NousResearch/DeepHermes-3-Llama-3-3B-Preview).
+
+ ## Model Details
+ - **Base Model:** NousResearch/DeepHermes-3-Llama-3-3B-Preview
+ - **Adapter Type:** LoRA
+ - **Task:** JEE Mathematics 3D Geometry problems
+
+ ## Usage
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ from peft import PeftModel
+ import torch
+
+ # Load base model and tokenizer
+ base_model = "NousResearch/DeepHermes-3-Llama-3-3B-Preview"
+ tokenizer = AutoTokenizer.from_pretrained(base_model)
+ model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.bfloat16)
+
+ # Load the LoRA adapter
+ adapter_model = "AthenaAgent42/new2"
+ model = PeftModel.from_pretrained(model, adapter_model)
+
+ # Example prompt
+ prompt = """
+ <Your prompt here>
+ """
+
+ # Generate response
+ input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
+ outputs = model.generate(input_ids, max_new_tokens=512)
+ response = tokenizer.decode(outputs[0], skip_special_tokens=True)
+ print(response)
+ ```
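The base model is an instruction-tuned chat model, so a raw prompt string may underperform compared to the model's own chat format. A minimal sketch, assuming the tokenizer ships a chat template and reusing `model` and `tokenizer` from the snippet above (the question text is only a placeholder):

```python
# Continues from the usage snippet above: `model` and `tokenizer` are already loaded.
messages = [
    {"role": "user", "content": "<Your JEE 3D-geometry question here>"},
]

# Render the conversation with the model's chat template and append the
# assistant turn marker so generation starts at the model's reply.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=512)

# Decode only the newly generated tokens, skipping the prompt.
response = tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(response)
```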
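If you prefer to serve the fine-tuned weights without a `peft` dependency at inference time, the adapter can be folded into the base weights with PEFT's `merge_and_unload()`. A minimal sketch, assuming the same model IDs as above (the output directory name is only an example):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

base_model = "NousResearch/DeepHermes-3-Llama-3-3B-Preview"
adapter_model = "AthenaAgent42/new2"

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(model, adapter_model)

# Fold the LoRA deltas into the base weights and drop the PEFT wrappers.
merged = model.merge_and_unload()

# Save as a regular Transformers checkpoint (example output path).
merged.save_pretrained("deephermes-3b-jee-3d-geometry-merged")
tokenizer.save_pretrained("deephermes-3b-jee-3d-geometry-merged")
```

The merged checkpoint can then be loaded directly with `AutoModelForCausalLM.from_pretrained`, at the cost of storing a full copy of the model weights.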