Improve model card with details and pipeline tag

#1
opened by nielsr (HF Staff)
Files changed (1)
  1. README.md +26 -6
README.md CHANGED
@@ -1,7 +1,7 @@
  ---
+ base_model: THU-KEG/LongWriter-V-7B
  library_name: transformers
  license: other
- base_model: THU-KEG/LongWriter-V-7B
  tags:
  - llama-factory
  - full
@@ -9,6 +9,7 @@ tags:
  model-index:
  - name: LongWriter-V-7B-DPO
    results: []
+ pipeline_tag: image-text-to-text
  ---

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -16,19 +17,20 @@ should probably proofread and complete it, then remove this comment. -->

  # LongWriter-V-7B-DPO

- This model is a fine-tuned version of [THU-KEG/LongWriter-V-7B](https://huggingface.co/THU-KEG/LongWriter-V-7B) on the LongWriter-V-DPO dataset.
+ This model is a fine-tuned version of [THU-KEG/LongWriter-V-7B](https://huggingface.co/THU-KEG/LongWriter-V-7B) on the LongWriter-V-DPO dataset, built for ultra-long, high-fidelity generation in vision-language models. It is designed to produce long, coherent outputs that stay consistent with the input images and text.
+

  ## Model description

- More information needed
+ LongWriter-V-7B-DPO is a vision-language model fine-tuned with DPO to generate ultra-long, high-fidelity text conditioned on both text and image inputs. The DPO stage improves the base model's coherence and contextual relevance at extreme output lengths, making it suitable for tasks that require detailed, extensive descriptions grounded in visual and textual information.

  ## Intended uses & limitations

- More information needed
+ The model is intended for long-form text generation conditioned on image and text inputs. Potential applications include generating lecture scripts from presentation slides, writing lengthy image descriptions, and other tasks that call for extended, detailed output. Output quality depends on the quality and relevance of the inputs, and the model is not designed for tasks that require real-time or up-to-date information.

  ## Training and evaluation data

- More information needed
+ The model was fine-tuned on the LongWriter-V-DPO dataset. Evaluation used MMLongBench-Write (long-output quality and length) and LongWrite-V-Ruler (a lightweight stress test of maximum output length), with GPT-4o as the judge.

  ## Training procedure

@@ -51,7 +53,7 @@ The following hyperparameters were used during training:

  ### Training results

-
+ [Link to training results or summary, if available]

  ### Framework versions

@@ -59,3 +61,21 @@ The following hyperparameters were used during training:
  - Pytorch 2.5.1+cu124
  - Datasets 3.2.0
  - Tokenizers 0.21.0
+
+ ## Sample Usage
+
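+ The snippet below is a minimal sketch using the standard `transformers` image-text-to-text chat interface; the repo id, example image, and generation settings are placeholders to adapt to your setup:
+
+ ```python
+ import requests
+ import torch
+ from PIL import Image
+ from transformers import AutoModelForVision2Seq, AutoProcessor
+
+ # Repo id assumed from this model card; point it at your local copy if needed.
+ model_id = "THU-KEG/LongWriter-V-7B-DPO"
+ model = AutoModelForVision2Seq.from_pretrained(
+     model_id, torch_dtype=torch.bfloat16, device_map="auto"
+ )
+ processor = AutoProcessor.from_pretrained(model_id)
+
+ # One input image plus a long-form writing instruction.
+ url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
+ image = Image.open(requests.get(url, stream=True).raw)
+ messages = [
+     {
+         "role": "user",
+         "content": [
+             {"type": "image"},
+             {"type": "text", "text": "Write a detailed 2,000-word description of this image."},
+         ],
+     }
+ ]
+ prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
+ inputs = processor(images=[image], text=prompt, return_tensors="pt").to(model.device)
+
+ # Ultra-long outputs need a generous token budget.
+ output_ids = model.generate(**inputs, max_new_tokens=8192, do_sample=True, temperature=0.7)
+ print(processor.batch_decode(
+     output_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
+ )[0])
+ ```
+
+ The requested length is steered through the prompt itself; make sure `max_new_tokens` is large enough to accommodate it.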
+
+ ## Citation
+
+ ```
+ @misc{tu2025longwriterv,
+   title={LongWriter-V: Enabling Ultra-Long and High-Fidelity Generation in Vision-Language Models},
+   author={Shangqing Tu and Yucheng Wang and Daniel Zhang-Li and Yushi Bai and Jifan Yu and Yuhao Wu and Lei Hou and Huiqin Liu and Zhiyuan Liu and Bin Xu and Juanzi Li},
+   year={2025},
+   eprint={2502.14834},
+   archivePrefix={arXiv},
+   primaryClass={cs.CV},
+   url={https://arxiv.org/abs/2502.14834},
+ }
+ ```