Model
Model Page: Gemma
- This model is a fine-tuned version of google/gemma-2b-it.
How to Use It
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the fine-tuned tokenizer and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("carrotter/ko-gemma-2b-it-sft")
model = AutoModelForCausalLM.from_pretrained("carrotter/ko-gemma-2b-it-sft")

# A single-turn chat in Korean ("Tell me Python code for the Fibonacci sequence")
chat = [
    {"role": "user", "content": "피보나치 수열 파이썬 코드로 알려줘"},
]

# Render the chat with Gemma's chat template, then tokenize and generate
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=100)
print(tokenizer.decode(outputs[0]))
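On a GPU, the model can optionally be loaded in half precision to reduce memory use. A minimal variant of the loading step above, assuming torch and accelerate are installed (this is not part of the original snippet):

import torch
from transformers import AutoModelForCausalLM

# Optional: load weights in bfloat16 and let accelerate place them on available devices
model = AutoModelForCausalLM.from_pretrained(
    "carrotter/ko-gemma-2b-it-sft",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)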
Example Output
<bos><start_of_turn>user
피보나치 수열 파이썬 코드로 알려줘<end_of_turn>
<start_of_turn>model
다음은 피보나치 수열을 파이썬으로 구현하는 방법의 예입니다:

def fibonacci(n):
    if n <= 1:
        return n
    else:
        return fibonacci(n-1) + fibonacci(n-2)

이 함수는 n이 피보나치 수열의 몇 번째 항인지에 따라 반환합니다. n이 1이거나 2인 경우

(The reply, cut off by the max_new_tokens=100 limit, translates to: "Here is an example of how to implement the Fibonacci sequence in Python: ... This function returns a value depending on which term of the Fibonacci sequence n is. If n is 1 or 2 ...")
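Note that tokenizer.decode(outputs[0]) returns the prompt together with Gemma's control tokens (<bos>, <start_of_turn>, <end_of_turn>). To print only the model's reply, one small variation on the snippet above:

# Decode only the newly generated tokens, skipping the prompt and special tokens
reply = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
print(reply)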
Applications
This fine-tuned model is particularly suited for Korean-language conversational applications such as chatbots and question-answering systems. Its fine-tuning helps it produce more accurate and contextually appropriate responses in these domains.
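For the chatbot use case, here is a minimal sketch of a multi-turn chat loop built from the snippet above; this loop is a suggested pattern, not part of the released model:

# Minimal multi-turn chat sketch, reusing the tokenizer and model loaded above
chat = []
while True:
    user_input = input("user> ")
    chat.append({"role": "user", "content": user_input})
    # Gemma's chat template expects the roles "user" and "model"
    prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
    inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
    outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=256)
    reply = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
    chat.append({"role": "model", "content": reply})
    print(reply)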
Limitations and Considerations
While our fine-tuning process has optimized the model for specific tasks, it has potential limitations: performance can still vary with the complexity of the task and the characteristics of the input data. Users are encouraged to evaluate the model thoroughly in their own context to confirm it meets their requirements.