This model is fine-tuned from mistral-7b-v0.1.

Load the model directly:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("devhyun88/hyun-mistral-7b-orca-platypus-refine")
model = AutoModelForCausalLM.from_pretrained("devhyun88/hyun-mistral-7b-orca-platypus-refine")
```

Downloads last month: 4
Model size: 7.24B params (Safetensors)
Tensor type: FP16
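Since the weights are stored in FP16 (2 bytes per parameter), a quick back-of-the-envelope sketch gives the minimum memory needed just to hold the 7.24B parameters; actual usage at inference time will be higher once activations and the KV cache are included.

```python
# Rough memory estimate for the model weights alone, in FP16.
params = 7.24e9          # parameter count reported on the model card
bytes_per_param = 2      # FP16 = 16 bits = 2 bytes
weight_gib = params * bytes_per_param / 1024**3
print(f"{weight_gib:.1f} GiB")  # roughly 13.5 GiB of weights
```

This is why loading a 7B-class model in half precision generally calls for a GPU with at least 16 GB of memory.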

Model tree for devhyun88/hyun-mistral-7b-orca-platypus-refine

- Quantizations: 1 model
- Spaces using this model: 6