# Model Card for Llama-3.1-8B Fine-Tuned for Financial Sentiment Analysis
This model is a fine-tuned version of Meta's Llama-3.1-8B, tailored for financial sentiment analysis. It was fine-tuned with LoRA adapters and 8-bit quantization to reduce memory and compute requirements while preserving performance.
## Model Details

### Model Description
- Model type: Causal Language Model fine-tuned for financial sentiment analysis
- Language(s): English
- Finetuned from model: meta-llama/Llama-3.1-8B
## Direct Use

The model can be used directly for financial sentiment analysis tasks, including:
- Analyzing the sentiment of financial news
- Sentiment classification on financial social media data
## How to Get Started with the Model
Use the following code to load the model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Base model and fine-tuned LoRA adapter identifiers
base_model = "meta-llama/Llama-3.1-8B"
peft_model = "llk010502/llama3.1-8B-financial_sentiment"

# Load the base model
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    trust_remote_code=True,
    device_map="auto",
)

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(base_model, trust_remote_code=True)

# Load the fine-tuned LoRA adapter on top of the base model
model = PeftModel.from_pretrained(model, peft_model)
model = model.eval()
```
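Since the card mentions 8-bit quantization, the base model can optionally be loaded in 8-bit to lower memory use. This is a minimal sketch, assuming the `bitsandbytes` package and a CUDA GPU are available; the exact quantization settings used during fine-tuning are not documented in this card.

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Optional: load the base model in 8-bit (requires bitsandbytes and a GPU)
bnb_config = BitsAndBytesConfig(load_in_8bit=True)

model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=bnb_config,
    trust_remote_code=True,
    device_map="auto",
)
```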
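Once the model and tokenizer are loaded, a minimal generation example looks like the following. The prompt format and the sentiment labels (positive, negative, neutral) are assumptions for illustration, since the card does not document the instruction template used during fine-tuning; adjust the prompt to match your data.

```python
import torch

# Hypothetical prompt format; the actual fine-tuning template is not documented in this card
prompt = (
    "Analyze the sentiment of the following financial news headline "
    "(positive, negative, or neutral):\n"
    "Shares of Acme Corp. surged 12% after a record quarterly profit.\n"
    "Sentiment:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output = model.generate(
        **inputs,
        max_new_tokens=10,
        do_sample=False,
        pad_token_id=tokenizer.eos_token_id,
    )

# Decode only the newly generated tokens
generated = output[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(generated, skip_special_tokens=True).strip())
```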