---
base_model: THU-KEG/LongWriter-V-7B
library_name: transformers
license: other
tags:
  - llama-factory
  - full
  - generated_from_trainer
model-index:
  - name: LongWriter-V-7B-DPO
    results: []
pipeline_tag: image-text-to-text
---

# LongWriter-V-7B-DPO

This model is a version of [THU-KEG/LongWriter-V-7B](https://huggingface.co/THU-KEG/LongWriter-V-7B) fine-tuned with Direct Preference Optimization (DPO) on the LongWriter-V-DPO dataset. It targets ultra-long, high-fidelity generation in vision-language models: producing long, coherent outputs that remain visually consistent with the input images and text descriptions.

## Model description

LongWriter-V-7B-DPO is a vision-language model fine-tuned to generate ultra-long, high-fidelity text conditioned on both text and image inputs. The DPO fine-tuning improves the base model's ability to stay coherent and contextually relevant even at extreme output lengths, making it suitable for tasks that require detailed, extensive descriptions grounded in visual and textual information.

## Intended uses & limitations

This model is intended for long-form text generation from image and text inputs. Potential applications include generating lecture scripts from presentation slides, writing lengthy descriptions of images, and other tasks that call for extended, detailed textual output. Output quality depends on the quality and relevance of the input image and text, and the model is not designed for tasks requiring real-time data or up-to-date information.

## Training and evaluation data

The model was fine-tuned on the LongWriter-V-DPO dataset. The evaluation benchmarks included MMLongBench-Write (focused on long output quality and length) and LongWrite-V-Ruler (a lightweight stress test of maximum output length). GPT-4o was used as the judge in the evaluation.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (an equivalent `TrainingArguments` sketch follows the list):

- learning_rate: 3e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
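
For reference, here is how these settings would map onto `transformers` `TrainingArguments`. This is an illustrative reconstruction under the assumption of a standard Hugging Face Trainer setup, not the actual LLaMA-Factory training script; the output directory name is hypothetical.

```python
from transformers import TrainingArguments

# Illustrative reconstruction of the reported hyperparameters.
# Effective train batch size: 1 per device x 8 GPUs x 8 accumulation steps = 64.
args = TrainingArguments(
    output_dir="longwriter-v-7b-dpo",  # hypothetical output path
    learning_rate=3e-6,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,
    num_train_epochs=3.0,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
)
```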

### Training results

[Link to training results or summary, if available]

### Framework versions

- Transformers 4.49.0.dev0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0

## Sample Usage

Below is a minimal sketch of image-text-to-text generation with `transformers`, assuming the checkpoint follows the Qwen2.5-VL chat interface of its base model; the image file, prompt, and generation length are illustrative.
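
```python
import torch
from PIL import Image
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model_id = "THU-KEG/LongWriter-V-7B-DPO"  # Hub id assumed from this card's naming
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# Hypothetical inputs: a local slide image and a long-form writing instruction.
image = Image.open("slide.png")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Write a detailed, lecture-length script based on this slide."},
        ],
    }
]

prompt = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)

# The model targets ultra-long outputs, so raise max_new_tokens well above chat defaults.
output_ids = model.generate(**inputs, max_new_tokens=8192)
new_tokens = output_ids[:, inputs["input_ids"].shape[1]:]
print(processor.batch_decode(new_tokens, skip_special_tokens=True)[0])
```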

## Citation

```bibtex
@misc{tu2025longwriterv,
      title={LongWriter-V: Enabling Ultra-Long and High-Fidelity Generation in Vision-Language Models},
      author={Shangqing Tu and Yucheng Wang and Daniel Zhang-Li and Yushi Bai and Jifan Yu and Yuhao Wu and Lei Hou and Huiqin Liu and Zhiyuan Liu and Bin Xu and Juanzi Li},
      year={2025},
      eprint={2502.14834},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2502.14834},
}
```