<link rel="stylesheet" href="static/css/tooltips.css">
<style>
.tooltip-right:hover::after {
left: auto !important;
right: 100% !important;
margin-left: 0 !important;
margin-right: 10px !important;
}
</style>
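<!--
  The .tooltip-right override above flips the tooltip bubble to the left of its trigger so
  it does not overflow past the right edge of the table. It assumes the base rule in
  static/css/tooltips.css opens the bubble to the right by default, roughly like the
  hypothetical sketch below (for illustration only; the real stylesheet may differ):

  .tooltip-trigger:hover::after {
    content: attr(data-tooltip);  /* bubble text comes from the data-tooltip attribute */
    position: absolute;
    left: 100%;                   /* default placement: to the right of the trigger */
    margin-left: 10px;            /* the values the !important declarations replace */
  }
-->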
<!-- Question Answering -->
<div id="question-answering" class="tab-content">
<h2 class="title is-4">Question Answering Task Results</h2>
<div class="results-table">
<table class="table is-bordered is-striped is-narrow is-hoverable is-fullwidth">
<thead>
<tr>
<th rowspan="2">Model</th>
<th colspan="3" class="has-text-centered">Datasets (Accuracy)</th>
</tr>
<tr>
<th class="has-text-centered tooltip-trigger" data-title="FinQA" data-tooltip="FinQA contains 8,281 question-answer pairs derived from financial reports that require numerical reasoning over tabular financial data. The question-answering task features multi-step reasoning challenges with full annotation of reasoning programs to solve complex financial queries.">FinQA</th>
<th class="has-text-centered tooltip-trigger tooltip-right" data-title="ConvFinQA" data-tooltip="ConvFinQA is a multi-turn question answering dataset with 3,892 conversations containing 14,115 questions that explore chains of numerical reasoning in financial contexts. The conversational task requires maintaining context while performing sequential numerical operations to answer increasingly complex financial questions.">ConvFinQA</th>
<th class="has-text-centered tooltip-trigger tooltip-right" data-title="TATQA" data-tooltip="TATQA is a large-scale question answering dataset for hybrid data sources that combines tables and text from financial reports. The task emphasizes numerical reasoning operations across multiple formats, requiring models to integrate information from structured and unstructured sources to answer financial questions.">TATQA</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tooltip-trigger" data-title="Llama 3 70B Instruct" data-tooltip="Meta's advanced 70 billion parameter dense language model optimized for instruction-following tasks. Available through Together AI and notable for complex reasoning capabilities.">Llama 3 70B Instruct</td>
<td class="has-text-centered">0.809</td>
<td class="has-text-centered">0.709</td>
<td class="has-text-centered">0.772</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="Llama 3 8B Instruct" data-tooltip="Meta's efficient 8 billion parameter language model optimized for instruction-following. Balances performance and efficiency for financial tasks with reasonable reasoning capabilities.">Llama 3 8B Instruct</td>
<td class="has-text-centered">0.767</td>
<td class="has-text-centered">0.268</td>
<td class="has-text-centered">0.706</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="DBRX Instruct" data-tooltip="Databricks' 132 billion parameter Mixture of Experts (MoE) model focused on advanced reasoning. Demonstrates competitive performance on financial tasks with strong text processing capabilities.">DBRX Instruct</td>
<td class="has-text-centered">0.738</td>
<td class="has-text-centered">0.252</td>
<td class="has-text-centered">0.633</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="DeepSeek LLM (67B)" data-tooltip="DeepSeek's 67 billion parameter model optimized for chat applications. Balances performance and efficiency across financial tasks with solid reasoning capabilities.">DeepSeek LLM (67B)</td>
<td class="has-text-centered">0.742</td>
<td class="has-text-centered">0.174</td>
<td class="has-text-centered">0.355</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="Gemma 2 27B" data-tooltip="Google's open-weight 27 billion parameter model optimized for reasoning tasks. Balances performance and efficiency across financial domains with strong instruction-following.">Gemma 2 27B</td>
<td class="has-text-centered">0.768</td>
<td class="has-text-centered">0.268</td>
<td class="has-text-centered">0.734</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="Gemma 2 9B" data-tooltip="Google's efficient open-weight 9 billion parameter model. Demonstrates good performance on financial tasks relative to its smaller size.">Gemma 2 9B</td>
<td class="has-text-centered">0.779</td>
<td class="has-text-centered">0.292</td>
<td class="has-text-centered">0.750</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="Mistral (7B) Instruct v0.3" data-tooltip="Mistral AI's 7 billion parameter instruction-tuned model. Demonstrates impressive efficiency with reasonable performance on financial tasks despite its smaller size.">Mistral (7B) Instruct v0.3</td>
<td class="has-text-centered">0.655</td>
<td class="has-text-centered">0.199</td>
<td class="has-text-centered">0.553</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="Mixtral-8x22B Instruct" data-tooltip="Mistral AI's 141 billion parameter MoE model with eight 22B expert networks. Features robust reasoning capabilities for financial tasks with strong instruction-following performance.">Mixtral-8x22B Instruct</td>
<td class="has-text-centered">0.766</td>
<td class="has-text-centered">0.285</td>
<td class="has-text-centered">0.666</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="Mixtral-8x7B Instruct" data-tooltip="Mistral AI's 47 billion parameter MoE model with eight 7B expert networks. Balances efficiency and performance with reasonable financial reasoning capabilities.">Mixtral-8x7B Instruct</td>
<td class="has-text-centered">0.611</td>
<td class="has-text-centered">0.315</td>
<td class="has-text-centered">0.501</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="Qwen 2 Instruct (72B)" data-tooltip="Alibaba's 72 billion parameter instruction-following model optimized for reasoning tasks. Features strong performance on financial domains with advanced text processing capabilities.">Qwen 2 Instruct (72B)</td>
<td class="has-text-centered">0.819</td>
<td class="has-text-centered">0.269</td>
<td class="has-text-centered">0.715</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="WizardLM-2 8x22B" data-tooltip="A 176 billion parameter MoE model focused on complex reasoning. Designed for advanced instruction-following with strong capabilities across financial tasks.">WizardLM-2 8x22B</td>
<td class="has-text-centered">0.796</td>
<td class="has-text-centered">0.247</td>
<td class="has-text-centered">0.725</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="DeepSeek-V3" data-tooltip="DeepSeek's 685 billion parameter Mixture of Experts (MoE) model optimized for advanced reasoning. Strong performance on financial tasks with robust instruction-following capabilities.">DeepSeek-V3</td>
<td class="has-text-centered performance-medium">0.840</td>
<td class="has-text-centered">0.261</td>
<td class="has-text-centered performance-low">0.779</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="DeepSeek R1" data-tooltip="DeepSeek's premium 671 billion parameter Mixture of Experts (MoE) model representing their most advanced offering. Designed for state-of-the-art performance across complex reasoning and financial tasks.">DeepSeek R1</td>
<td class="has-text-centered performance-low">0.836</td>
<td class="has-text-centered performance-best">0.853</td>
<td class="has-text-centered performance-best">0.858</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="QwQ-32B-Preview" data-tooltip="Qwen's experimental 32 billion parameter MoE model focused on efficient computation. Features interesting performance characteristics on certain financial tasks.">QwQ-32B-Preview</td>
<td class="has-text-centered">0.793</td>
<td class="has-text-centered">0.282</td>
<td class="has-text-centered performance-medium">0.796</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="Jamba 1.5 Mini" data-tooltip="A compact variant in the Jamba model series focused on efficiency. Balances performance and computational requirements for financial tasks.">Jamba 1.5 Mini</td>
<td class="has-text-centered">0.666</td>
<td class="has-text-centered">0.218</td>
<td class="has-text-centered">0.586</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="Jamba 1.5 Large" data-tooltip="An expanded variant in the Jamba model series with enhanced capabilities. Features stronger reasoning for financial tasks than its smaller counterpart.">Jamba 1.5 Large</td>
<td class="has-text-centered">0.790</td>
<td class="has-text-centered">0.225</td>
<td class="has-text-centered">0.660</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="Claude 3.5 Sonnet" data-tooltip="Anthropic's advanced proprietary language model optimized for complex reasoning and instruction-following. Features enhanced performance on financial tasks with strong text processing capabilities.">Claude 3.5 Sonnet</td>
<td class="has-text-centered performance-best">0.844</td>
<td class="has-text-centered">0.402</td>
<td class="has-text-centered">0.700</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="Claude 3 Haiku" data-tooltip="Anthropic's smaller efficiency-focused model in the Claude family. Designed for speed and lower computational requirements while maintaining reasonable performance on financial tasks.">Claude 3 Haiku</td>
<td class="has-text-centered">0.803</td>
<td class="has-text-centered">0.421</td>
<td class="has-text-centered">0.733</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="Cohere Command R 7B" data-tooltip="Cohere's 7-billion parameter model focused on instruction-following. An efficient model with reasonable financial domain capabilities for its size.">Cohere Command R 7B</td>
<td class="has-text-centered">0.709</td>
<td class="has-text-centered">0.212</td>
<td class="has-text-centered">0.716</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="Cohere Command R +" data-tooltip="Cohere's enhanced command model with improved instruction-following capabilities. Features advanced reasoning for financial domains with stronger performance than its smaller counterpart.">Cohere Command R +</td>
<td class="has-text-centered">0.776</td>
<td class="has-text-centered">0.259</td>
<td class="has-text-centered">0.698</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="Google Gemini 1.5 Pro" data-tooltip="Google's advanced proprietary multimodal model designed for complex reasoning and instruction-following tasks. Features strong performance across financial domains with advanced reasoning capabilities.">Google Gemini 1.5 Pro</td>
<td class="has-text-centered">0.829</td>
<td class="has-text-centered">0.280</td>
<td class="has-text-centered">0.763</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="OpenAI gpt-4o" data-tooltip="OpenAI's flagship multimodal model optimized for a balance of quality and speed. Features strong performance across diverse tasks with capabilities for complex financial reasoning and instruction following.">OpenAI gpt-4o</td>
<td class="has-text-centered performance-low">0.836</td>
<td class="has-text-centered performance-low">0.749</td>
<td class="has-text-centered">0.754</td>
</tr>
<tr>
<td class="tooltip-trigger" data-title="OpenAI o1-mini" data-tooltip="OpenAI's smaller advanced model balancing efficiency and performance. Demonstrates surprisingly strong results on financial tasks despite its reduced parameter count.">OpenAI o1-mini</td>
<td class="has-text-centered">0.799</td>
<td class="has-text-centered performance-medium">0.840</td>
<td class="has-text-centered">0.698</td>
</tr>
</tbody>
</table>
<div class="content is-small mt-4">
<p><strong>Note:</strong> Color highlighting marks the top three results in each dataset column:
<span class="performance-best">&nbsp;Best&nbsp;</span>,
<span class="performance-medium">&nbsp;Strong&nbsp;</span> (2nd),
<span class="performance-low">&nbsp;Good&nbsp;</span> (3rd)
</p>
</div>
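<!--
  The performance-best, performance-medium, and performance-low classes used in the cells
  above come from the site stylesheet rather than this file. A hypothetical sketch of what
  those rules might look like, assuming simple background highlights (shown for reference
  only, not the actual implementation):

  .performance-best   { background-color: #b9f6ca; font-weight: bold; }  /* 1st in a column */
  .performance-medium { background-color: #ccff90; }                     /* 2nd in a column */
  .performance-low    { background-color: #f4ff81; }                     /* 3rd in a column */
-->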
</div>
</div>
<script src="static/js/tooltips.js"></script>
<script src="static/js/fixed-tooltips.js"></script>