Ingredient Selection Agent (GGUF)
Model Description
A food-specialized AI that recommends optimal ingredients based on health-status analysis results.
Key function: health-status analysis → ingredient recommendation
Technical Details
- Base Model: MLP-KTLim/llama-3-Korean-Bllossom-8B
- Fine-tuning Method: LoRA (Low-Rank Adaptation)
- Quantization: 8-bit during training
- Format: GGUF
- Language: Korean (한국어)
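The LoRA fine-tuning setup listed above can be sketched with Hugging Face `peft` and `bitsandbytes`. This is a configuration sketch only: the rank, alpha, dropout, and target modules below are illustrative assumptions, not the values actually used to train this model.

```python
# Illustrative sketch of LoRA fine-tuning in 8-bit.
# All hyperparameters here are assumptions, not this model's actual config.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained(
    "MLP-KTLim/llama-3-Korean-Bllossom-8B",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # 8-bit during training
    device_map="auto",
)
lora = LoraConfig(
    r=16,                     # assumed rank
    lora_alpha=32,            # assumed scaling
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed targets
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)  # only the low-rank adapters are trainable
model.print_trainable_parameters()
```

With this setup, only the small adapter matrices are updated during training, which is what makes fine-tuning an 8B model feasible on modest hardware.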
Usage
GGUF Version
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Note: loading a GGUF checkpoint through transformers additionally requires
# the gguf_file argument naming the quantized file inside the repository.
tokenizer = AutoTokenizer.from_pretrained("carefood/agent2-ingredient-selection-8b-gguf")
model = AutoModelForCausalLM.from_pretrained("carefood/agent2-ingredient-selection-8b-gguf")

# Usage example
prompt = "Please analyze the survey results..."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=512)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
```
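Since the weights are distributed as GGUF, they can also be run outside `transformers`, e.g. with llama.cpp. Whichever runtime is used, Llama-3-based models expect the Llama-3 chat template; the helper below builds that single-turn prompt by hand (the system message is an illustrative assumption, not part of this model card):

```python
def build_llama3_prompt(
    user_msg: str,
    system_msg: str = "You are a food-ingredient recommendation assistant.",  # assumed
) -> str:
    """Format a single-turn prompt using the Llama-3 chat template."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_msg}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_msg}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt("Please analyze the survey results...")
```

In practice, `tokenizer.apply_chat_template(...)` produces this same format automatically; the manual version is shown only to make the template visible.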
Training Data
- Korean health survey data
- Disease-specific dietary information
- Ingredient recommendation data
Limitations
- Does not replace professional diagnosis by medical staff
- Use only as reference information
- Always consult a medical professional about serious health issues
Developers
- Organization: carefood
- Contact: GitHub Issues
Please use this model only as a health-management aid.
Model tree for carefood/agent2-ingredient-selection-8b-gguf
- Base model: meta-llama/Meta-Llama-3-8B
- Fine-tuned from: MLP-KTLim/llama-3-Korean-Bllossom-8B