---
license: mit
---

<p align="center">
  <img src="https://github.com/sy77777en/CameraBench/blob/main/images/CameraBench.png" width="600">
</p>

## 📷 **CameraBench: Towards Understanding Camera Motions in Any Video**

[![](https://img.shields.io/badge/arXiv-2504.15376-b31b1b.svg?logo=arxiv&logoColor=white)](https://arxiv.org/abs/2504.15376)
[![](https://img.shields.io/badge/%F0%9F%8F%A0%20_Homepage-4285F4?color=4285F4&logoColor=white)](https://linzhiqiu.github.io/papers/camerabench/)
[![](https://img.shields.io/badge/%F0%9F%A4%97%20_CameraBench_testset-FF9B00?color=FF9B00&logoColor=white)](https://huggingface.co/datasets/syCen/CameraBench)

![SfM vs. VLM results on CameraBench](./images/sfm_vs_vlm.jpg)
> **SfM and VLM performance on CameraBench**: Generative VLMs (evaluated with [VQAScore](https://linzhiqiu.github.io/papers/vqascore/)) trail classical SfM/SLAM methods on pure geometry, yet they outperform discriminative VLMs that rely on CLIPScore/ITMScore and, better still, capture scene-aware semantic cues that SfM misses.
> After simple supervised fine-tuning (SFT) on ≈1,400 additional annotated clips, our 7B Qwen2.5-VL doubles its AP, outperforming the current best method, MegaSAM.

## 📰 News
- **[2025/04/26]🔥** We open-sourced our **fine-tuned 7B model** and the public **test set**: 1,000+ videos with expert labels and captions.
- **LLMs-eval** integration is in progress; stay tuned!
- 32B & 72B checkpoints are on the way.

## 🌍 Explore More
- [🤗**CameraBench Testset**](https://huggingface.co/datasets/syCen/CameraBench): Download the test set (a loading sketch follows this list).
- [🚀**Fine-tuned Model**](): Access model checkpoints.
- [🏠**Home Page**](https://linzhiqiu.github.io/papers/camerabench/): Demos & docs.
- [📖**Paper**](https://arxiv.org/abs/2504.15376): Detailed information about CameraBench.
- [📈**Leaderboard**](https://sy77777en.github.io/CameraBench/leaderboard/table.html): Explore the full leaderboard.

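The test set can be pulled directly with the Hugging Face `datasets` library. The snippet below is a minimal sketch, assuming the default JSON configuration; the split and field names are not documented here, so inspect the loaded dataset for the actual schema.

```python
# Minimal loading sketch; split and field names are assumptions, not documented schema.
from datasets import load_dataset

ds = load_dataset("syCen/CameraBench")  # public test set: 1,000+ expert-labeled videos
print(ds)                               # shows the available splits and columns

# Peek at one annotated example from the first available split.
first_split = next(iter(ds.values()))
print(first_split[0])
```
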
## 🔎 VQA evaluation on VLMs

<table>
  <tr>
    <td>
      <div style="display: flex; flex-direction: column; gap: 1em;">
        <img src="./images/VQA-Leaderboard.png" width="440">
      </div>
    </td>
    <td>
      <div style="display: flex; flex-direction: column; gap: 1em;">
        <div>
          <img src="./images/8-1.gif" width="405"><br>
          🤔: Does the camera track the subject from a side view? <br>
          🤖: ✅ &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 🙋: ✅
        </div>
        <div>
          <img src="./images/8-2.gif" width="405"><br>
          🤔: Does the camera only move down during the video? <br>
          🤖: ❌ &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 🙋: ✅
        </div>
        <div>
          <img src="./images/8-3.gif" width="405"><br>
          🤔: Does the camera move backward while zooming in? <br>
          🤖: ❌ &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 🙋: ✅
        </div>
      </div>
    </td>
  </tr>
</table>

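Each probe above is a binary question; following VQAScore, a generative VLM is scored by the probability it assigns to an affirmative answer. The snippet below is only an illustrative sketch of that decision rule on toy log-probabilities; it is not the project's evaluation code, and obtaining the log-probabilities from an actual VLM is omitted.

```python
import math

def yes_no_probability(logprob_yes: float, logprob_no: float) -> float:
    """Renormalize a VLM's log-probabilities for "Yes" and "No" into P(Yes),
    in the spirit of VQAScore's affirmative-answer probability."""
    p_yes, p_no = math.exp(logprob_yes), math.exp(logprob_no)
    return p_yes / (p_yes + p_no)

# Toy numbers only (not real model outputs): the model leans toward "Yes".
prob = yes_no_probability(logprob_yes=-0.3, logprob_no=-1.6)
print(f"P(Yes) = {prob:.2f} -> prediction: {'Yes' if prob >= 0.5 else 'No'}")
```
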
## ✏️ Citation

If you find this repository useful for your research, please use the following BibTeX entry.
```bibtex
@article{lin2025towards,
  title={Towards Understanding Camera Motions in Any Video},
  author={Lin, Zhiqiu and Cen, Siyuan and Jiang, Daniel and Karhade, Jay and Wang, Hewei and Mitra, Chancharik and Ling, Tiffany and Huang, Yuhan and Liu, Sifan and Chen, Mingyu and Zawar, Rushikesh and Bai, Xue and Du, Yilun and Gan, Chuang and Ramanan, Deva},
  journal={arXiv preprint arXiv:2504.15376},
  year={2025},
}
```