Update logos number 5
src/about.py +4 -4
src/about.py
CHANGED
@@ -75,13 +75,13 @@ TITLE = """<h1 align="center" id="space-title">🇯🇵Open Japanese LLM Leaderb
 BOTTOM_LOGO = """
 <div style="display: flex; flex-direction: row; justify-content: space-around; align-items: center" dir="ltr">
     <a href="https://llm-jp.nii.ac.jp/en/">
-        <img src="file/Logos-HQ/LLM-jp-Logo-Oct-2024.png" alt="LLM-jp" style="max-height: 100px">
+        <img src="file/src/Logos-HQ/LLM-jp-Logo-Oct-2024.png" alt="LLM-jp" style="max-height: 100px">
     </a>
     <a href="https://mdx.jp/">
-        <img src="file/Logos-HQ/MDX-Logo-Oct-2024.jpg" alt="MDX platform" style="max-height: 100px">
+        <img src="file/src/Logos-HQ/MDX-Logo-Oct-2024.jpg" alt="MDX platform" style="max-height: 100px">
     </a>
     <a href="https://huggingface.co/">
-        <img src="file/Logos-HQ/HuggingFace-Logo-Oct-2024.png" alt="Hugging Face" style="max-height: 100px">
+        <img src="file/src/Logos-HQ/HuggingFace-Logo-Oct-2024.png" alt="Hugging Face" style="max-height: 100px">
     </a>
 </div>
 </div>

@@ -97,7 +97,7 @@ On the __"LLM Benchmark"__ page, the question mark **"?"** refers to the paramet
 # Which evaluations are you running? how can people reproduce what you have?
 LLM_BENCHMARKS_TEXT = f"""
 ## How it works
-📈 We evaluate Japanese Large Language Models on
+📈 We evaluate Japanese Large Language Models on 16 tasks leveraging our evaluation tool [llm-jp-eval](https://github.com/llm-jp/llm-jp-eval), a unified framework to evaluate Japanese LLMs on various evaluation tasks.

 **NLI (Natural Language Inference)**
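The substance of this commit is the `file/Logos-HQ/` → `file/src/Logos-HQ/` prefix change: the logo images are served relative to the app root, so the URLs inside `BOTTOM_LOGO` must match where the files actually live in the repository. A minimal sketch of the corrected constant (trimmed to one logo; the full constant in `src/about.py` lists all three):

```python
# Sketch of the updated BOTTOM_LOGO constant from src/about.py after this
# commit. Gradio-style "file/..." URLs are resolved against the app's
# working directory, so the src/ prefix must appear in each image path.
BOTTOM_LOGO = """
<div style="display: flex; flex-direction: row; justify-content: space-around; align-items: center" dir="ltr">
    <a href="https://llm-jp.nii.ac.jp/en/">
        <img src="file/src/Logos-HQ/LLM-jp-Logo-Oct-2024.png" alt="LLM-jp" style="max-height: 100px">
    </a>
</div>
"""

# After the fix, every image URL carries the src/ prefix and the old
# prefix no longer appears on its own.
assert "file/src/Logos-HQ/" in BOTTOM_LOGO
assert 'src="file/Logos-HQ/' not in BOTTOM_LOGO
```

Whether the Space serves these paths also depends on how it is launched (e.g. which directories the app is allowed to serve); that configuration is outside this diff.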