# Dorami-Instruct
Dorami-Instruct is a supervised fine-tuned (SFT) model based on the pretrained model lucky2me/Dorami.
## Model description

## Training data

## Training code
## How to use
1. Download the model from the Hugging Face Hub to a local directory:
```bash
git lfs install
git clone https://huggingface.co/lucky2me/Dorami-Instruct
```
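Alternatively, if you prefer not to use git-lfs, the same files can be fetched with the `huggingface_hub` library. A minimal sketch; the `local_dir` value is just an example path:

```python
from huggingface_hub import snapshot_download

# Download every file in the repo into a local directory (path is an example).
model_path = snapshot_download(
    repo_id="lucky2me/Dorami-Instruct",
    local_dir="./Dorami-Instruct",
)
```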
2. Load the downloaded model and generate text:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig

model_path = "./Dorami-Instruct"  # path of the local clone from step 1
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)

prompt = "fill in any prompt you like."
inputs = tokenizer(prompt, return_tensors="pt")

# Sample up to 64 new tokens, choosing among the top-2 candidates at each step.
generation_config = GenerationConfig(
    max_new_tokens=64, do_sample=True, top_k=2, eos_token_id=model.config.eos_token_id
)
outputs = model.generate(**inputs, generation_config=generation_config)

decoded_text = tokenizer.batch_decode(outputs, skip_special_tokens=True)
print(decoded_text)
```
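For a quicker smoke test, the same generation can also be run through the high-level `pipeline` API. This is a sketch, not part of the original card: it assumes the `model_path` from the snippet above and reuses the same sampling settings with a placeholder prompt.

```python
from transformers import pipeline

# Build a text-generation pipeline from the local model directory.
generator = pipeline("text-generation", model=model_path)

# Same sampling settings as the GenerationConfig above; prompt is a placeholder.
result = generator(
    "fill in any prompt you like.",
    max_new_tokens=64,
    do_sample=True,
    top_k=2,
)
print(result[0]["generated_text"])
```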