Model Card for Finetuned FinBERT on Market-Based Facts

This model is fine-tuned on market reactions to news events. By relying on market-based labels rather than human annotation, it avoids the human biases present in traditional annotation methods.

Our FinBERT model, fine-tuned on impactful news headlines about global equity markets, shows significant performance improvements over standard models. Training on real-world market impact rather than subjective financial-expert opinions sets a new standard for unbiased financial sentiment analysis. πŸ“ˆ The training dataset is also uploaded on HuggingFace.
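As a usage reference, the sketch below shows one way to run the model on headlines with the transformers text-classification pipeline. The repository id and the example headlines are placeholders, not part of this card; substitute the actual Hub id of this model.

```python
# Minimal inference sketch using the transformers text-classification pipeline.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="your-org/finbert-market-based",  # placeholder: replace with this model's Hub id
)

headlines = [
    "Central bank signals faster rate cuts than markets expected",
    "Chipmaker cuts full-year revenue guidance on weak demand",
]

# Each prediction is a dict with the predicted label and a confidence score.
for headline, pred in zip(headlines, classifier(headlines)):
    print(f"{pred['label']:>10}  {pred['score']:.3f}  {headline}")
```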

Outperforms FinBERT

  • 🎯 +25% precision
  • πŸš€ +18% recall

Outperforms DistilRoBERTa fine-tuned for finance

  • 🎯 +22% precision
  • πŸš€ +15% recall

Outperforms GPT-4 (zero-shot)

  • 🎯 +15% precision
  • πŸš€ +8.2% recall

Validation Metrics

| Metric             | Value               |
|--------------------|---------------------|
| loss               | 0.9176467061042786  |
| f1_macro           | 0.49749240436690023 |
| f1_micro           | 0.5627105467737756  |
| f1_weighted        | 0.5279720746084178  |
| precision_macro    | 0.5386355574899088  |
| precision_micro    | 0.5627105467737756  |
| precision_weighted | 0.5462149036191247  |
| recall_macro       | 0.517542664344306   |
| recall_micro       | 0.5627105467737756  |
| recall_weighted    | 0.5627105467737756  |
| accuracy           | 0.5627105467737756  |
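On how these averages relate: for single-label multi-class classification, the micro-averaged precision, recall, and F1 all reduce to plain accuracy, which is why those rows share the same value. The sketch below illustrates the computation with scikit-learn on hypothetical labels; the actual evaluation data is not part of this card.

```python
# Sketch of how the averaged metrics above are computed with scikit-learn;
# y_true / y_pred are hypothetical stand-ins for the held-out labels and
# model predictions, which are not included in this card.
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

y_true = [0, 1, 2, 1, 0, 2]   # hypothetical gold labels
y_pred = [0, 1, 1, 1, 0, 2]   # hypothetical model predictions

for avg in ("macro", "micro", "weighted"):
    p, r, f1, _ = precision_recall_fscore_support(y_true, y_pred, average=avg)
    print(f"{avg:>8}: precision={p:.3f} recall={r:.3f} f1={f1:.3f}")

# For single-label multi-class data, micro precision = micro recall = micro F1 = accuracy.
print(f"accuracy: {accuracy_score(y_true, y_pred):.3f}")
```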

This model was developed following a paper presented at the Risk Forum 2024 conference, available at https://arxiv.org/abs/2401.05447.

Model details: 109M parameters, F32 tensors, Safetensors weights.