
### Python code with Pipeline

```python
import transformers
import torch

model_id = "VIRNECT/llama-3-Korean-8B-V2"

# bfloat16 halves memory relative to fp32; device_map="auto" lets Accelerate
# place the model across the available GPUs/CPU.
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

pipeline.model.eval()  # inference mode: disables dropout and other training-only behavior

# System prompt (Korean): "You are a friendly chatbot that converses with humans.
# You provide detailed information in response to questions, tailored to the
# situation. If you do not know the answer to a question, you say that you do not know."
PROMPT = '''당신은 인간과 대화하는 친절한 챗봇입니다. 질문에 대한 정보를 상황에 맞게 자세히 제공합니다. 당신이 질문에 대한 답을 모른다면, 사실은 모른다고 말합니다.'''
# Example question (Korean): "How does chemical engineering differ from other engineering fields?"
instruction = "화학공학이 다른 공학 분야와 어떻게 다른가요?"

messages = [
    {"role": "system", "content": PROMPT},
    {"role": "user", "content": instruction},
]

# Render the messages into Llama-3's chat format, ending with the assistant header.
prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

# Stop at either the tokenizer's EOS token or Llama-3's end-of-turn token.
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

outputs = pipeline(
    prompt,
    max_new_tokens=2048,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9
)

# generated_text includes the prompt, so slice it off to print only the reply.
print(outputs[0]["generated_text"][len(prompt):])
```
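
If you prefer to skip the pipeline wrapper and call `generate` directly, a minimal sketch along the following lines should behave equivalently. This variant is illustrative rather than part of the official card: it reuses the `PROMPT` and `instruction` strings defined above, and the sampling parameters simply mirror the pipeline example.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "VIRNECT/llama-3-Korean-8B-V2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model.eval()

# Same roles and (Korean) contents as the pipeline example above.
messages = [
    {"role": "system", "content": PROMPT},
    {"role": "user", "content": instruction},
]

# Tokenize straight to tensors; add_generation_prompt appends the assistant header.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

with torch.no_grad():
    output_ids = model.generate(
        input_ids,
        max_new_tokens=2048,
        eos_token_id=[
            tokenizer.eos_token_id,
            tokenizer.convert_tokens_to_ids("<|eot_id|>"),
        ],
        do_sample=True,
        temperature=0.6,
        top_p=0.9,
    )

# Decode only the tokens generated after the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Both paths build the same Llama-3 prompt via `apply_chat_template`; the direct route just exposes token-level control (attention masks, custom stopping criteria, streaming) that the pipeline hides.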
