Usage
!pip install -qU transformers bitsandbytes accelerate
import transformers
import torch

model = "Eurdem/Pinokio_v1.0"

# The pipeline loads the matching tokenizer automatically, so a separate AutoTokenizer call is not needed.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    model_kwargs={"torch_dtype": torch.bfloat16, "load_in_8bit": True},
)
messages = [{"role": "user", "content": """Her gün 30 km koşarsam, 270 km yolu kaç günde koşabilirim?"""}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, do_sample=True, temperature=0.7, top_k=500, top_p=0.7, max_new_tokens=1024)
print(outputs[0]["generated_text"])  # generated_text contains the prompt followed by the model's answer