<link rel="stylesheet" href="static/css/tooltips.css">
<style>
.tooltip-right:hover::after {
left: auto !important;
right: 100% !important;
margin-left: 0 !important;
margin-right: 10px !important;
}
</style>
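<!--
  The .tooltip-right override above flips the tooltip to the left of its
  trigger so the last column header's tooltip stays inside the viewport.
  Below is a minimal sketch of the kind of base rule it overrides, assuming
  static/css/tooltips.css follows the common data-attribute pattern (the
  data-title text would be rendered by a companion ::before rule). This is
  an illustration, not the actual stylesheet contents:

  .tooltip-trigger { position: relative; cursor: help; }
  .tooltip-trigger:hover::after {
    content: attr(data-tooltip);  /* tooltip body comes from the data-tooltip attribute */
    position: absolute;
    left: 100%;                   /* default: open to the right of the trigger */
    top: 0;
    margin-left: 10px;            /* the inline override zeroes this and uses margin-right */
    width: 320px;
    padding: 0.75em;
    background: #363636;
    color: #fff;
    border-radius: 4px;
    z-index: 10;
  }
-->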
<!-- Sentiment Analysis -->
<div id="sentiment-analysis" class="tab-content">
<h2 class="title is-4">Sentiment Analysis Task Results</h2>
<div class="results-table">
<table class="table is-bordered is-striped is-narrow is-hoverable is-fullwidth">
<thead>
<tr>
<th rowspan="2">Model</th>
<th colspan="3" class="has-text-centered tooltip-trigger" data-title="FiQA Task 1" data-tooltip="FiQA Task 1 focuses on aspect-based financial sentiment analysis in microblog posts and news headlines using a continuous scale from -1 (negative) to 1 (positive). The regression task requires models to accurately predict the sentiment score that reflects investor perception of financial texts.">FiQA Task 1</th>
<th colspan="4" class="has-text-centered tooltip-trigger" data-title="Financial Phrase Bank" data-tooltip="Financial Phrase Bank (FPB) contains 4,840 sentences from financial news articles categorized as positive, negative, or neutral by 16 finance experts using majority voting. The sentiment classification task requires understanding how these statements might influence investor perception of stock prices.">Financial Phrase Bank (FPB)</th>
<th colspan="4" class="has-text-centered tooltip-trigger tooltip-right" style="position: relative;" data-title="SubjECTive-QA" data-tooltip="SubjECTive-QA contains 49,446 annotations across 2,747 question-answer pairs extracted from 120 earnings call transcripts. The multi-label classification task involves analyzing six subjective features in financial discourse: assertiveness, cautiousness, optimism, specificity, clarity, and relevance.">SubjECTive-QA</th>
</tr>
<tr>
<th class="has-text-centered">MSE</th>
<th class="has-text-centered">MAE</th>
<th class="has-text-centered">r² Score</th>
<th class="has-text-centered">Accuracy</th>
<th class="has-text-centered">Precision</th>
<th class="has-text-centered">Recall</th>
<th class="has-text-centered">F1</th>
<th class="has-text-centered">Precision</th>
<th class="has-text-centered">Recall</th>
<th class="has-text-centered">F1</th>
<th class="has-text-centered">Accuracy</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tooltip-trigger" data-title="Llama 3 70B Instruct" data-tooltip="Meta's advanced 70 billion parameter dense language model optimized for instruction-following tasks. Available through Together AI and notable for complex reasoning capabilities.">Llama 3 70B Instruct</td>
<td class="has-text-centered">0.123</td>
<td class="has-text-centered">0.290</td>
<td class="has-text-centered">0.272</td>
<td class="has-text-centered">0.901</td>
<td class="has-text-centered">0.904</td>
<td class="has-text-centered">0.901</td>
<td class="has-text-centered">0.902</td>
<td class="has-text-centered">0.652</td>
<td class="has-text-centered">0.573</td>
<td class="has-text-centered">0.535</td>
<td class="has-text-centered">0.573</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="Llama 3 8B Instruct" data-tooltip="Meta's efficient 8 billion parameter language model optimized for instruction-following. Balances performance and efficiency for financial tasks with reasonable reasoning capabilities.">Llama 3 8B Instruct</td>
<td class="has-text-centered">0.161</td>
<td class="has-text-centered">0.344</td>
<td class="has-text-centered">0.045</td>
<td class="has-text-centered">0.738</td>
<td class="has-text-centered">0.801</td>
<td class="has-text-centered">0.738</td>
<td class="has-text-centered">0.698</td>
<td class="has-text-centered">0.635</td>
<td class="has-text-centered">0.625</td>
<td class="has-text-centered performance-best">0.600</td>
<td class="has-text-centered">0.625</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="DBRX Instruct" data-tooltip="Databricks' 132 billion parameter Mixture of Experts (MoE) model focused on advanced reasoning. Demonstrates competitive performance on financial tasks with strong text processing capabilities.">DBRX Instruct</td>
<td class="has-text-centered">0.160</td>
<td class="has-text-centered">0.321</td>
<td class="has-text-centered">0.052</td>
<td class="has-text-centered">0.524</td>
<td class="has-text-centered">0.727</td>
<td class="has-text-centered">0.524</td>
<td class="has-text-centered">0.499</td>
<td class="has-text-centered">0.654</td>
<td class="has-text-centered">0.541</td>
<td class="has-text-centered">0.436</td>
<td class="has-text-centered">0.541</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="DeepSeek LLM (67B)" data-tooltip="DeepSeek's 67 billion parameter model optimized for chat applications. Balances performance and efficiency across financial tasks with solid reasoning capabilities.">DeepSeek LLM (67B)</td>
<td class="has-text-centered">0.118</td>
<td class="has-text-centered">0.278</td>
<td class="has-text-centered">0.302</td>
<td class="has-text-centered">0.815</td>
<td class="has-text-centered">0.867</td>
<td class="has-text-centered">0.815</td>
<td class="has-text-centered">0.811</td>
<td class="has-text-centered">0.676</td>
<td class="has-text-centered">0.544</td>
<td class="has-text-centered">0.462</td>
<td class="has-text-centered">0.544</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="Gemma 2 27B" data-tooltip="Google's open-weight 27 billion parameter model optimized for reasoning tasks. Balances performance and efficiency across financial domains with strong instruction-following.">Gemma 2 27B</td>
<td class="has-text-centered performance-best">0.100</td>
<td class="has-text-centered performance-best">0.266</td>
<td class="has-text-centered">0.406</td>
<td class="has-text-centered">0.890</td>
<td class="has-text-centered">0.896</td>
<td class="has-text-centered">0.890</td>
<td class="has-text-centered">0.884</td>
<td class="has-text-centered">0.562</td>
<td class="has-text-centered">0.524</td>
<td class="has-text-centered">0.515</td>
<td class="has-text-centered">0.524</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="Gemma 2 9B" data-tooltip="Google's efficient open-weight 9 billion parameter model. Demonstrates good performance on financial tasks relative to its smaller size.">Gemma 2 9B</td>
<td class="has-text-centered">0.189</td>
<td class="has-text-centered">0.352</td>
<td class="has-text-centered">-0.120</td>
<td class="has-text-centered performance-strong">0.940</td>
<td class="has-text-centered performance-strong">0.941</td>
<td class="has-text-centered performance-strong">0.940</td>
<td class="has-text-centered performance-strong">0.940</td>
<td class="has-text-centered">0.570</td>
<td class="has-text-centered">0.499</td>
<td class="has-text-centered">0.491</td>
<td class="has-text-centered">0.499</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="Mistral (7B) Instruct v0.3" data-tooltip="Mistral AI's 7 billion parameter instruction-tuned model. Demonstrates impressive efficiency with reasonable performance on financial tasks despite its smaller size.">Mistral (7B) Instruct v0.3</td>
<td class="has-text-centered">0.135</td>
<td class="has-text-centered">0.278</td>
<td class="has-text-centered">0.200</td>
<td class="has-text-centered">0.847</td>
<td class="has-text-centered">0.854</td>
<td class="has-text-centered">0.847</td>
<td class="has-text-centered">0.841</td>
<td class="has-text-centered">0.607</td>
<td class="has-text-centered">0.542</td>
<td class="has-text-centered">0.522</td>
<td class="has-text-centered">0.542</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="Mixtral-8x22B Instruct" data-tooltip="Mistral AI's 141 billion parameter MoE model with eight 22B expert networks. Features robust reasoning capabilities for financial tasks with strong instruction-following performance.">Mixtral-8x22B Instruct</td>
<td class="has-text-centered">0.221</td>
<td class="has-text-centered">0.364</td>
<td class="has-text-centered">-0.310</td>
<td class="has-text-centered">0.768</td>
<td class="has-text-centered">0.845</td>
<td class="has-text-centered">0.768</td>
<td class="has-text-centered">0.776</td>
<td class="has-text-centered">0.614</td>
<td class="has-text-centered">0.538</td>
<td class="has-text-centered">0.510</td>
<td class="has-text-centered">0.538</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="Mixtral-8x7B Instruct" data-tooltip="Mistral AI's 47 billion parameter MoE model with eight 7B expert networks. Balances efficiency and performance with reasonable financial reasoning capabilities.">Mixtral-8x7B Instruct</td>
<td class="has-text-centered">0.208</td>
<td class="has-text-centered">0.307</td>
<td class="has-text-centered">-0.229</td>
<td class="has-text-centered">0.896</td>
<td class="has-text-centered">0.898</td>
<td class="has-text-centered">0.896</td>
<td class="has-text-centered">0.893</td>
<td class="has-text-centered">0.611</td>
<td class="has-text-centered">0.518</td>
<td class="has-text-centered">0.498</td>
<td class="has-text-centered">0.518</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="Qwen 2 Instruct (72B)" data-tooltip="Alibaba's 72 billion parameter instruction-following model optimized for reasoning tasks. Features strong performance on financial domains with advanced text processing capabilities.">Qwen 2 Instruct (72B)</td>
<td class="has-text-centered">0.205</td>
<td class="has-text-centered">0.409</td>
<td class="has-text-centered">-0.212</td>
<td class="has-text-centered">0.904</td>
<td class="has-text-centered">0.908</td>
<td class="has-text-centered">0.904</td>
<td class="has-text-centered">0.901</td>
<td class="has-text-centered">0.644</td>
<td class="has-text-centered">0.601</td>
<td class="has-text-centered">0.576</td>
<td class="has-text-centered">0.601</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="WizardLM-2 8x22B" data-tooltip="A 176 billion parameter MoE model focused on complex reasoning. Designed for advanced instruction-following with strong capabilities across financial tasks.">WizardLM-2 8x22B</td>
<td class="has-text-centered">0.129</td>
<td class="has-text-centered">0.283</td>
<td class="has-text-centered">0.239</td>
<td class="has-text-centered">0.765</td>
<td class="has-text-centered">0.853</td>
<td class="has-text-centered">0.765</td>
<td class="has-text-centered">0.779</td>
<td class="has-text-centered">0.611</td>
<td class="has-text-centered">0.570</td>
<td class="has-text-centered">0.566</td>
<td class="has-text-centered">0.570</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="DeepSeek-V3" data-tooltip="DeepSeek's 685 billion parameter Mixture of Experts (MoE) model optimized for advanced reasoning. Strong performance on financial tasks with robust instruction-following capabilities.">DeepSeek-V3</td>
<td class="has-text-centered">0.150</td>
<td class="has-text-centered">0.311</td>
<td class="has-text-centered">0.111</td>
<td class="has-text-centered">0.828</td>
<td class="has-text-centered">0.851</td>
<td class="has-text-centered">0.828</td>
<td class="has-text-centered">0.814</td>
<td class="has-text-centered">0.640</td>
<td class="has-text-centered">0.572</td>
<td class="has-text-centered performance-medium">0.583</td>
<td class="has-text-centered">0.572</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="DeepSeek R1" data-tooltip="DeepSeek's premium 671 billion parameter Mixture of Experts (MoE) model representing their most advanced offering. Designed for state-of-the-art performance across complex reasoning and financial tasks.">DeepSeek R1</td>
<td class="has-text-centered performance-low">0.110</td>
<td class="has-text-centered">0.289</td>
<td class="has-text-centered">0.348</td>
<td class="has-text-centered">0.904</td>
<td class="has-text-centered">0.907</td>
<td class="has-text-centered">0.904</td>
<td class="has-text-centered">0.902</td>
<td class="has-text-centered">0.644</td>
<td class="has-text-centered">0.489</td>
<td class="has-text-centered">0.499</td>
<td class="has-text-centered">0.489</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="QwQ-32B-Preview" data-tooltip="Qwen's experimental 32 billion parameter MoE model focused on efficient computation. Features interesting performance characteristics on certain financial tasks.">QwQ-32B-Preview</td>
<td class="has-text-centered">0.141</td>
<td class="has-text-centered">0.290</td>
<td class="has-text-centered">0.165</td>
<td class="has-text-centered">0.812</td>
<td class="has-text-centered">0.827</td>
<td class="has-text-centered">0.812</td>
<td class="has-text-centered">0.815</td>
<td class="has-text-centered">0.629</td>
<td class="has-text-centered">0.534</td>
<td class="has-text-centered">0.550</td>
<td class="has-text-centered">0.534</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="Jamba 1.5 Mini" data-tooltip="A compact variant in the Jamba model series focused on efficiency. Balances performance and computational requirements for financial tasks.">Jamba 1.5 Mini</td>
<td class="has-text-centered performance-low">0.119</td>
<td class="has-text-centered">0.282</td>
<td class="has-text-centered">0.293</td>
<td class="has-text-centered">0.784</td>
<td class="has-text-centered">0.814</td>
<td class="has-text-centered">0.784</td>
<td class="has-text-centered">0.765</td>
<td class="has-text-centered">0.380</td>
<td class="has-text-centered">0.525</td>
<td class="has-text-centered">0.418</td>
<td class="has-text-centered">0.525</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="Jamba 1.5 Large" data-tooltip="An expanded variant in the Jamba model series with enhanced capabilities. Features stronger reasoning for financial tasks than its smaller counterpart.">Jamba 1.5 Large</td>
<td class="has-text-centered">0.183</td>
<td class="has-text-centered">0.363</td>
<td class="has-text-centered">-0.085</td>
<td class="has-text-centered">0.824</td>
<td class="has-text-centered">0.850</td>
<td class="has-text-centered">0.824</td>
<td class="has-text-centered">0.798</td>
<td class="has-text-centered">0.635</td>
<td class="has-text-centered">0.573</td>
<td class="has-text-centered performance-medium">0.582</td>
<td class="has-text-centered">0.573</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="Claude 3.5 Sonnet" data-tooltip="Anthropic's advanced proprietary language model optimized for complex reasoning and instruction-following. Features enhanced performance on financial tasks with strong text processing capabilities.">Claude 3.5 Sonnet</td>
<td class="has-text-centered performance-low">0.101</td>
<td class="has-text-centered performance-low">0.268</td>
<td class="has-text-centered performance-best">0.402</td>
<td class="has-text-centered performance-best">0.944</td>
<td class="has-text-centered performance-best">0.945</td>
<td class="has-text-centered performance-best">0.944</td>
<td class="has-text-centered performance-best">0.944</td>
<td class="has-text-centered">0.634</td>
<td class="has-text-centered performance-medium">0.585</td>
<td class="has-text-centered">0.553</td>
<td class="has-text-centered performance-medium">0.585</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="Claude 3 Haiku" data-tooltip="Anthropic's smaller efficiency-focused model in the Claude family. Designed for speed and lower computational requirements while maintaining reasonable performance on financial tasks.">Claude 3 Haiku</td>
<td class="has-text-centered">0.167</td>
<td class="has-text-centered">0.349</td>
<td class="has-text-centered">0.008</td>
<td class="has-text-centered">0.907</td>
<td class="has-text-centered">0.913</td>
<td class="has-text-centered">0.907</td>
<td class="has-text-centered">0.908</td>
<td class="has-text-centered">0.619</td>
<td class="has-text-centered">0.538</td>
<td class="has-text-centered">0.463</td>
<td class="has-text-centered">0.538</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="Cohere Command R 7B" data-tooltip="Cohere's 7-billion parameter model focused on instruction-following. An efficient model with reasonable financial domain capabilities for its size.">Cohere Command R 7B</td>
<td class="has-text-centered">0.164</td>
<td class="has-text-centered">0.319</td>
<td class="has-text-centered">0.028</td>
<td class="has-text-centered">0.835</td>
<td class="has-text-centered">0.861</td>
<td class="has-text-centered">0.835</td>
<td class="has-text-centered">0.840</td>
<td class="has-text-centered">0.609</td>
<td class="has-text-centered">0.547</td>
<td class="has-text-centered">0.532</td>
<td class="has-text-centered">0.547</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="Cohere Command R +" data-tooltip="Cohere's enhanced command model with improved instruction-following capabilities. Features advanced reasoning for financial domains with stronger performance than its smaller counterpart.">Cohere Command R +</td>
<td class="has-text-centered performance-medium">0.106</td>
<td class="has-text-centered">0.274</td>
<td class="has-text-centered performance-medium">0.373</td>
<td class="has-text-centered">0.741</td>
<td class="has-text-centered">0.806</td>
<td class="has-text-centered">0.741</td>
<td class="has-text-centered">0.699</td>
<td class="has-text-centered">0.608</td>
<td class="has-text-centered">0.547</td>
<td class="has-text-centered">0.533</td>
<td class="has-text-centered">0.547</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="Google Gemini 1.5 Pro" data-tooltip="Google's advanced proprietary multimodal model designed for complex reasoning and instruction-following tasks. Features strong performance across financial domains with advanced reasoning capabilities.">Google Gemini 1.5 Pro</td>
<td class="has-text-centered">0.144</td>
<td class="has-text-centered">0.329</td>
<td class="has-text-centered">0.149</td>
<td class="has-text-centered">0.890</td>
<td class="has-text-centered">0.895</td>
<td class="has-text-centered">0.890</td>
<td class="has-text-centered">0.885</td>
<td class="has-text-centered">0.642</td>
<td class="has-text-centered performance-medium">0.587</td>
<td class="has-text-centered performance-best">0.593</td>
<td class="has-text-centered performance-best">0.587</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="OpenAI gpt-4o" data-tooltip="OpenAI's flagship multimodal model optimized for a balance of quality and speed. Features strong performance across diverse tasks with capabilities for complex financial reasoning and instruction following.">OpenAI gpt-4o</td>
<td class="has-text-centered">0.184</td>
<td class="has-text-centered">0.317</td>
<td class="has-text-centered">-0.089</td>
<td class="has-text-centered">0.929</td>
<td class="has-text-centered">0.931</td>
<td class="has-text-centered">0.929</td>
<td class="has-text-centered">0.928</td>
<td class="has-text-centered">0.639</td>
<td class="has-text-centered">0.515</td>
<td class="has-text-centered">0.541</td>
<td class="has-text-centered">0.515</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="OpenAI o1-mini" data-tooltip="OpenAI's smaller advanced model balancing efficiency and performance. Demonstrates surprisingly strong results on financial tasks despite its reduced parameter count.">OpenAI o1-mini</td>
<td class="has-text-centered performance-medium">0.120</td>
<td class="has-text-centered">0.295</td>
<td class="has-text-centered">0.289</td>
<td class="has-text-centered">0.918</td>
<td class="has-text-centered">0.917</td>
<td class="has-text-centered">0.918</td>
<td class="has-text-centered">0.917</td>
<td class="has-text-centered performance-best">0.660</td>
<td class="has-text-centered">0.515</td>
<td class="has-text-centered">0.542</td>
<td class="has-text-centered">0.515</td>
</tr>
</tbody>
</table>
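<!--
  Scoring reference. FiQA Task 1 is evaluated as a regression over sentiment
  scores in [-1, 1], which is what the MSE, MAE, and R² columns report. The
  sketch below shows the standard formulas in plain JavaScript; it is an
  illustration under that assumption, not the FLaME evaluation code.

  // Mean squared error, mean absolute error, and coefficient of
  // determination (R²) for parallel arrays of gold and predicted scores.
  function regressionMetrics(gold, pred) {
    const n = gold.length;
    const mean = gold.reduce((a, b) => a + b, 0) / n;
    let sse = 0, sae = 0, sst = 0;
    for (let i = 0; i < n; i++) {
      const err = gold[i] - pred[i];
      sse += err * err;             // squared error against the prediction
      sae += Math.abs(err);         // absolute error against the prediction
      sst += (gold[i] - mean) ** 2; // squared deviation from the gold mean
    }
    return { mse: sse / n, mae: sae / n, r2: 1 - sse / sst };
  }

  // Example: regressionMetrics([0.5, -0.2, 0.8], [0.4, -0.1, 0.9]) returns
  // { mse: 0.01, mae: 0.1, r2: ~0.943 }. R² goes negative when predictions
  // fit worse than always guessing the gold mean, as in several rows above.

  For the classification columns, Recall equals Accuracy in every row, which
  is consistent with support-weighted averaging of the per-class metrics
  (weighted recall reduces algebraically to overall accuracy); that appears
  to be the averaging scheme used, though the table itself does not state it.
-->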
<div class="content is-small mt-4">
<p><strong>Note:</strong> Color highlighting indicates performance ranking:
<span class="performance-best">&nbsp;Best&nbsp;</span>,
<span class="performance-strong">&nbsp;Strong&nbsp;</span>,
<span class="performance-medium">&nbsp;Medium&nbsp;</span>,
<span class="performance-low">&nbsp;Low&nbsp;</span>.
Lower values are better for MSE and MAE; higher values are better for all other metrics.
</p>
</div>
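<!--
  The performance-* classes referenced in the note above are defined in the
  site's stylesheet. A hypothetical sketch of what a four-tier scheme like
  this might look like (the class names come from this table; the colors are
  illustrative assumptions, not the actual styles):

  .performance-best   { background-color: #b7e4c7; font-weight: bold; }  /* top tier */
  .performance-strong { background-color: #d8f3dc; }                     /* near-best tier */
  .performance-medium { background-color: #fff3bf; }                     /* mid tier */
  .performance-low    { background-color: #ffe8cc; }                     /* lowest highlighted tier */
-->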
</div>
</div>
<script src="static/js/tooltips.js"></script>
<script src="static/js/fixed-tooltips.js"></script>