FinCreditPhi-3.5-mini
Model Overview
FinCreditPhi-3.5-mini is a Korean-language model designed specifically for financial credit evaluation.
- Base model: unsloth/Phi-3.5-mini-instruct
- Dataset: himedia/financial_dummy_data_v4
- Training method: LoRA (Low-Rank Adaptation)
- Training date: 20250622_131709
Training Results
- Final Training Loss: 0.1521
- Final Validation Loss: 0.1550
- Best Validation Loss: 0.1550 (step 1000)
- Overall Improvement: 87.0%
- Training Time: 73.66 minutes
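The 87.0% figure presumably measures the relative drop from the initial training loss to the final one; under that assumption, the implied starting loss can be back-calculated (a sketch, not a value from the training log):

```python
# Assuming "Overall Improvement" means the relative drop in training loss,
# the implied initial loss can be recovered from the reported final loss.
final_loss = 0.1521
improvement = 0.87  # 87.0%

implied_initial_loss = final_loss / (1 - improvement)
print(round(implied_initial_loss, 3))  # ~1.17 under this assumption
```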
Hyperparameters
- Learning Rate: 0.0002
- Max Steps: 1000
- Batch Size: 4
- Gradient Accumulation: 4
- LoRA r: 32
- LoRA alpha: 32
- Max Sequence Length: 2048
- Warmup Steps: 5
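The per-device batch size and gradient accumulation above combine into the effective batch size, which appears to be what the `bs16` component of the repository name refers to:

```python
# Effective batch size = per-device batch size x gradient accumulation steps.
batch_size = 4
gradient_accumulation = 4

effective_batch_size = batch_size * gradient_accumulation
print(effective_batch_size)  # 16, matching the "bs16" in the repository name
```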
Memory Usage
- GPU: NVIDIA RTX A5000
- Peak Memory: 6.381 GB
- Memory Usage: 27.1%
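The peak figure and the utilization percentage are mutually consistent: dividing one by the other recovers the total GPU memory the percentage was computed against, close to the RTX A5000's 24 GB:

```python
# Back-calculate the total GPU memory implied by the reported numbers.
peak_gb = 6.381
usage_fraction = 0.271  # 27.1%

implied_total_gb = peak_gb / usage_fraction
print(round(implied_total_gb, 1))  # ~23.5 GB, consistent with a 24 GB A5000
```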
Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("himedia/fincredit-Phi-3.5-mini-lr2e04-bs16-r32-steps1000-20250622_131709")
model = AutoModelForCausalLM.from_pretrained("himedia/fincredit-Phi-3.5-mini-lr2e04-bs16-r32-steps1000-20250622_131709")

# Simple inference example
prompt = "Evaluate the customer's credit rating:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
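Since the base model is instruction-tuned, wrapping the prompt in the chat template (via `tokenizer.apply_chat_template`) generally gives better results than a raw string. As an illustration only, here is a sketch of the Phi-3-style chat markup, assuming the family's standard `<|user|>` / `<|assistant|>` / `<|end|>` tokens (verify against the actual tokenizer before relying on it):

```python
# Hypothetical helper mirroring the Phi-3 chat markup; in practice, prefer
# tokenizer.apply_chat_template, which knows the exact format.
def build_phi3_prompt(user_message, system_message=None):
    parts = []
    if system_message:
        parts.append(f"<|system|>\n{system_message}<|end|>")
    parts.append(f"<|user|>\n{user_message}<|end|>")
    parts.append("<|assistant|>")  # generation continues from here
    return "\n".join(parts)

prompt = build_phi3_prompt("Evaluate the customer's credit rating:")
print(prompt)
```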
Training Data Files
This repository also includes the following training-related files:
- training_log.json : full training log (JSON format)
- FinCreditPhi-3.5-mini_20250622_131709_training_curves.png : training-curve visualization image
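The schema of training_log.json is not documented here, so the keys below (`"step"`, `"loss"`) are assumptions; this is only a sketch of how the log could be inspected once downloaded:

```python
import json

# Hypothetical sketch: the log is assumed to be a list of {"step": ..., "loss": ...}
# records. Adjust the keys to the actual schema of training_log.json.
def best_loss(log_path):
    with open(log_path) as f:
        entries = json.load(f)
    return min(e["loss"] for e in entries if "loss" in e)  # lowest loss seen

# Illustration with a synthetic log in the assumed schema:
with open("training_log.json", "w") as f:
    json.dump([{"step": 500, "loss": 0.41}, {"step": 1000, "loss": 0.1521}], f)
print(best_loss("training_log.json"))  # 0.1521 for this synthetic log
```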
Repository Name Breakdown
fincredit-Phi-3.5-mini-lr2e04-bs16-r32-steps1000-20250622_131709
- fincredit-Phi-3.5-mini : model base name
- lr2e04 : learning rate (2e-4)
- bs16 : effective batch size (per-device batch 4 × gradient accumulation 4)
- r32 : LoRA rank
- steps1000 : training steps
- 20250622_131709 : training timestamp
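The breakdown above can be automated with a small parser; the pattern below is specific to this repository's naming scheme and is only an illustration, not an official convention:

```python
import re

# Parse the hyperparameter suffix out of a repo name following this scheme.
def parse_repo_name(name):
    m = re.search(
        r"lr(?P<lr>\w+)-bs(?P<bs>\d+)-r(?P<rank>\d+)-steps(?P<steps>\d+)"
        r"-(?P<timestamp>\d{8}_\d{6})$",
        name,
    )
    if not m:
        raise ValueError(f"unrecognized repo name: {name}")
    return m.groupdict()

info = parse_repo_name("fincredit-Phi-3.5-mini-lr2e04-bs16-r32-steps1000-20250622_131709")
print(info)  # {'lr': '2e04', 'bs': '16', 'rank': '32', 'steps': '1000', 'timestamp': '20250622_131709'}
```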
Performance
This model is fine-tuned on Korean financial text and specialized for credit-evaluation question answering.
License
Apache 2.0
Model tree for himedia/fincredit-Phi-3.5-mini-lr2e04-bs16-r32-steps1000-20250622_131709
- Base model: microsoft/Phi-3.5-mini-instruct
- Finetuned from: unsloth/Phi-3.5-mini-instruct