# Oracle Fusion Cloud SCM - DoRA Adapter
This is a DoRA (Weight-Decomposed Low-Rank Adaptation) adapter specialized in Oracle Fusion Cloud SCM topics.
## 🎯 Usage
**Merging in Google Colab:**
```python
# 1. Imports
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
import torch

base_model_name = "unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit"
adapter_name = "ozkurt7/oracle-deepseek-r1-adapter"

# 2. Load the base model and the adapter
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_name,
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, adapter_name)

# 3. Merge the adapter weights into the base model
merged_model = model.merge_and_unload()

# 4. Save the merged model
merged_model.save_pretrained("./oracle-merged")
tokenizer.save_pretrained("./oracle-merged")

# 5. Quick test
messages = [
    {"role": "system", "content": "You are an Oracle Fusion Cloud SCM expert."},
    {"role": "user", "content": "What is Oracle SCM Cloud?"},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(merged_model.device)
outputs = merged_model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
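Conceptually, `merge_and_unload()` folds the trained low-rank update into the base weights so that inference needs only a single dense matmul per layer, with no adapter code path. For a plain LoRA layer this reduces to `W ← W + (α/r)·B·A`; DoRA additionally rescales per-column magnitudes. A minimal numpy sketch of the LoRA merge identity (illustrative only, not PEFT internals):

```python
import numpy as np

rng = np.random.default_rng(42)
d_out, d_in, r, alpha = 16, 16, 4, 8

W = rng.standard_normal((d_out, d_in))   # frozen base weight
A = rng.standard_normal((r, d_in))       # trained low-rank factors
B = rng.standard_normal((d_out, r))

# Unmerged forward pass: base path plus scaled low-rank adapter path
x = rng.standard_normal(d_in)
y_adapter = W @ x + (alpha / r) * (B @ (A @ x))

# Merged forward pass: fold the update into the weight, then one matmul
W_merged = W + (alpha / r) * (B @ A)
y_merged = W_merged @ x

# Both paths produce the same output
assert np.allclose(y_adapter, y_merged)
```

This is why the merged checkpoint is a drop-in replacement for the base model: the architecture is unchanged, only the weight values differ.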
**Local usage:**
```shell
# Download the adapter
git clone https://huggingface.co/ozkurt7/oracle-deepseek-r1-adapter

# Merge in Python
python merge_adapter.py
```
## 📊 Model Details
- Base Model: unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit
- Technique: DoRA (Weight-Decomposed Low-Rank Adaptation)
- Domain: Oracle Fusion Cloud SCM
- Status: Adapter only (merge required)
- Memory: ~500MB (adapter only)
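What distinguishes DoRA from plain LoRA is that each weight matrix is decomposed into a magnitude vector and a direction matrix, and the low-rank update is applied only to the direction, after which columns are renormalized and rescaled by the learned magnitudes. A minimal numpy sketch of the decomposition (illustrative, not the PEFT implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 8, 8, 2

W0 = rng.standard_normal((d_out, d_in))
B = np.zeros((d_out, r))            # low-rank factors; B starts at zero
A = rng.standard_normal((r, d_in))
m = np.linalg.norm(W0, axis=0)      # magnitude vector, initialized from base column norms

V = W0 + B @ A                      # direction component with the low-rank update
W_adapted = m * (V / np.linalg.norm(V, axis=0))  # renormalize columns, rescale by m

# At initialization (B = 0) the adapted weight equals the base weight,
# so training starts from the pretrained model's behavior.
assert np.allclose(W_adapted, W0)
```

During fine-tuning, `m`, `A`, and `B` are trained while `W0` stays frozen, which is why the adapter download is only ~500MB.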
## 🚀 Next Steps
- Merge this adapter with the base model in Google Colab
- Upload the merged model to a new repo
- Convert it to GGUF format
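For the GGUF step, the usual route is llama.cpp's conversion script, run against the merged full-weight checkpoint (`./oracle-merged` from the snippet above). Exact script names and flags depend on your llama.cpp version, so treat this as a sketch:

```shell
# Clone llama.cpp and install its Python conversion dependencies
git clone https://github.com/ggerganov/llama.cpp
pip install -r llama.cpp/requirements.txt

# Convert the merged HF checkpoint to GGUF in f16
python llama.cpp/convert_hf_to_gguf.py ./oracle-merged \
    --outfile oracle-merged-f16.gguf --outtype f16
```

The f16 GGUF can then be quantized further (e.g. to Q4_K_M) with llama.cpp's `llama-quantize` tool if you need a smaller file.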
## 🛠️ Troubleshooting
- **Memory Error**: use Colab Pro or merge locally
- **Loading Error**: add `trust_remote_code=True` to `from_pretrained`
- **CUDA Error**: use `device_map="auto"`
Created by: Kaggle → Google Colab workflow
Date: 2025-08-12