Model

Model Page: Gemma

  • A fine-tuned version of the google/gemma-2b-it model.

How to Use It

from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the fine-tuned tokenizer and model from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("carrotter/ko-gemma-2b-it-sft")
model = AutoModelForCausalLM.from_pretrained("carrotter/ko-gemma-2b-it-sft")

# A single-turn Korean prompt: "Show me the Fibonacci sequence as Python code."
chat = [
    { "role": "user", "content": "피보나치 수열 파이썬 코드로 알려줘" },
]
# Render the conversation with Gemma's chat template, appending the model-turn header.
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)

# The rendered template already includes <bos>, so don't add special tokens again.
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=100)
print(tokenizer.decode(outputs[0]))
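For reference, apply_chat_template renders the conversation in Gemma's turn-based layout. The sketch below is an illustrative re-implementation of that layout inferred from the example output in this card, not the tokenizer's actual template code; in practice, always use tokenizer.apply_chat_template.

```python
def render_gemma_chat(chat, add_generation_prompt=True):
    """Illustrative sketch of Gemma's chat layout (use apply_chat_template in practice)."""
    text = "<bos>"
    for turn in chat:
        # Each turn is wrapped in <start_of_turn>/<end_of_turn> markers.
        text += f"<start_of_turn>{turn['role']}\n{turn['content']}<end_of_turn>\n"
    if add_generation_prompt:
        # Open the model turn so generation continues as the assistant.
        text += "<start_of_turn>model\n"
    return text
```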

Example Output

<bos><start_of_turn>user
피보나치 수열 파이썬 코드로 알려줘<end_of_turn>
<start_of_turn>model
다음은 피보나치 수열을 파이썬으로 구현하는 방법의 예입니다:

def fibonacci(n):
    if n <= 1:
        return n
    else:
        return fibonacci(n-1) + fibonacci(n-2)

이 함수는 n이 피보나치 수열의 몇 번째 항인지에 따라 반환합니다. n이 1이거나 2인 경우

(Translation: the user asks "Show me the Fibonacci sequence as Python code"; the model replies "Here is an example of how to implement the Fibonacci sequence in Python: … This function returns a value according to which term of the Fibonacci sequence n is. When n is 1 or 2". The reply is cut off mid-sentence by the 100-token generation limit.)
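The recursive snippet the model produced is correct but takes exponential time in n. As a side note (not part of the model's output), an iterative variant runs in linear time:

```python
def fibonacci(n):
    # Iterative Fibonacci: O(n) time, O(1) space.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```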

Applications

This fine-tuned model is particularly suited to Korean-language applications such as chatbots and question-answering systems. Fine-tuning is intended to yield more accurate and contextually appropriate responses in these domains.

Limitations and Considerations

While fine-tuning has optimized the model for specific tasks, it is important to acknowledge its limitations. Performance can still vary with the complexity of the task and the characteristics of the input data. Users are encouraged to evaluate the model thoroughly in their own context to confirm it meets their requirements.
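As a minimal sketch of such an evaluation (hypothetical helper names; the generation function is injected so the harness itself stays model-agnostic), one might run a small battery of prompts and check each reply for expected keywords:

```python
def evaluate_keyword_coverage(cases, generate_fn):
    """Run a prompt battery and report which expected keywords each reply contains.

    cases: list of (prompt, [expected keywords]).
    generate_fn: callable mapping a prompt string to the model's reply text;
    in real use this would wrap tokenizer/model.generate from the usage example.
    """
    results = []
    for prompt, keywords in cases:
        reply = generate_fn(prompt)
        hits = [kw for kw in keywords if kw in reply]
        results.append({
            "prompt": prompt,
            "coverage": len(hits) / len(keywords) if keywords else 1.0,
            "missing": [kw for kw in keywords if kw not in hits],
        })
    return results
```

For example, passing a lambda that calls the model lets you spot prompts whose replies omit required terms before deploying.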

Model Details

  • Format: Safetensors
  • Model size: 2.51B params
  • Tensor type: FP16
