Inference problems for all Qwen2.5-VL models in transformers above 4.49.0
#26 opened 4 days ago by mirekphd · 1 reply
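
If the regression is indeed in releases above 4.49.0, a hedged workaround (not a confirmed root cause) is to pin the last known-good version until it is fixed upstream:

```python
# Hedged workaround: pin transformers to the last release the thread title
# reports as working; the exact regression above 4.49.0 is not verified here.
import subprocess
import sys

subprocess.check_call([sys.executable, "-m", "pip", "install", "transformers==4.49.0"])

import transformers
assert transformers.__version__ == "4.49.0"
```
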
Question about M-RoPE
#25 opened 7 days ago by JavenChen · 1 reply
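
For background, M-RoPE (multimodal rotary position embedding) gives each token three position indices (temporal, height, width) and partitions each head's rotary channels among those axes. Below is a minimal sketch of the channel merge, modeled on (not copied from) the open-source implementation, assuming the [16, 24, 24] mrope_section and head_dim=128 from the released configs:

```python
import torch

def merge_mrope_tables(cos3, sin3, mrope_section):
    """Merge per-axis rotary tables into one table per head channel.

    cos3, sin3: [3, seq_len, head_dim] cos/sin tables, one per position axis
    (temporal, height, width). mrope_section, e.g. [16, 24, 24], says how many
    frequency channels each axis owns; the list is repeated because HF-style
    rotary tables store the frequencies twice (cat([freqs, freqs])).
    """
    sections = mrope_section * 2  # [16, 24, 24, 16, 24, 24] -> sums to head_dim
    def merge(t):
        chunks = t.split(sections, dim=-1)
        return torch.cat([c[i % 3] for i, c in enumerate(chunks)], dim=-1)
    return merge(cos3), merge(sin3)

# Shapes follow the released Qwen2.5-VL configs; values are random placeholders.
cos3, sin3 = torch.randn(2, 3, 10, 128).unbind(0)
cos, sin = merge_mrope_tables(cos3, sin3, [16, 24, 24])
print(cos.shape)  # torch.Size([10, 128])
```
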
Upload IMG-20250318-WA0007.jpg
#23 opened 11 days ago by Aceyung1

'Qwen2_5_VLProcessor' object has no attribute 'eos_token'
#22 opened 11 days ago by itztheking
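
A likely workaround (hedged; the thread's root cause is not confirmed here): HF processors wrap a tokenizer, and the special tokens live on that tokenizer rather than on the processor object itself:

```python
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-72B-Instruct")
# The processor wraps a tokenizer; special tokens live there.
eos_token = processor.tokenizer.eos_token       # instead of processor.eos_token
eos_token_id = processor.tokenizer.eos_token_id
print(eos_token, eos_token_id)
```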

Empty model output when deploying 72B locally; how can this be fixed?
#21 opened 16 days ago by Cranegu · 1 reply

Qwen/Qwen2.5-VL-72B-Instruct
#20 opened 16 days ago by chnsmth

Could you please share the detailed parameter settings for the online demo?
#18 opened 24 days ago by harryzwh

vLLM inference with 32k-128k-token inputs
#17 opened 27 days ago by luckyZhangHu
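
A hedged sketch of how such lengths are usually requested from vLLM (max_model_len is a real engine argument; the parallelism and multimodal limits below are placeholders, and going past the native context may additionally require the YaRN rope-scaling setup from the model card):

```python
from vllm import LLM, SamplingParams

# Placeholder sizing: tensor_parallel_size and limit_mm_per_prompt must be
# tuned to your hardware; 131072 assumes the model card's long-context
# (YaRN) setup has been applied.
llm = LLM(
    model="Qwen/Qwen2.5-VL-72B-Instruct",
    max_model_len=131072,
    tensor_parallel_size=8,
    limit_mm_per_prompt={"image": 4},
)
out = llm.generate("Summarize: ...", SamplingParams(max_tokens=256))
print(out[0].outputs[0].text)
```
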
Official finetuning example?
#16 opened about 1 month ago by erichartford · 2 replies

Can anyone let me know what hardware can run the 72B model?
#15 opened about 1 month ago by haoyiharrison · 2 replies
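
For rough sizing, the arithmetic is just parameter count times bytes per parameter; this covers weights only, with KV cache, activations and runtime overhead on top:

```python
# Back-of-the-envelope weight memory for a 72B-parameter model.
params = 72e9
for fmt, bytes_per_param in [("bf16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    gib = params * bytes_per_param / 2**30
    print(f"{fmt}: ~{gib:.0f} GiB")
# bf16: ~134 GiB, int8: ~67 GiB, int4: ~34 GiB
```

So bf16 weights alone already need roughly two 80 GB GPUs before any KV cache is allocated.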

Fix model tree (remove loop)
#14 opened about 1 month ago by hekmon

batch inference error
#13 opened about 1 month ago by 404dreamer · 1 reply
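
Without seeing the traceback, a common cause (an assumption, not a confirmed diagnosis) is right padding during batched generation. A sketch of the batched path that usually works; file paths are placeholders:

```python
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
from qwen_vl_utils import process_vision_info

model_id = "Qwen/Qwen2.5-VL-72B-Instruct"
processor = AutoProcessor.from_pretrained(model_id)
processor.tokenizer.padding_side = "left"  # decoder-only generation pads left
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Two single-image conversations; the image paths are placeholders.
conversations = [
    [{"role": "user", "content": [
        {"type": "image", "image": "file:///path/a.jpg"},
        {"type": "text", "text": "Describe this image."}]}],
    [{"role": "user", "content": [
        {"type": "image", "image": "file:///path/b.jpg"},
        {"type": "text", "text": "What is in this photo?"}]}],
]
texts = [processor.apply_chat_template(c, tokenize=False, add_generation_prompt=True)
         for c in conversations]
image_inputs, video_inputs = process_vision_info(conversations)

inputs = processor(text=texts, images=image_inputs, padding=True,
                   return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(out, skip_special_tokens=True))
```
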
Error in preprocessing prompt inputs
#12 opened about 1 month ago by darvec

cannot import name 'Qwen2_5_VLImageProcessor' (on vLLM)
#11 opened about 1 month ago by cbrug · 4 replies
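
This import error usually signals a vLLM/transformers version mismatch: the image-processor class names in the Qwen2.5-VL integration moved between transformers releases, so the pair has to be aligned. A quick probe (hedged; exact compatible version pairs are not verified here):

```python
import transformers, vllm

print("transformers:", transformers.__version__)
print("vllm:", vllm.__version__)

try:
    # This class exists only in some transformers releases; vLLM versions
    # that import it need a matching transformers install.
    from transformers import Qwen2_5_VLImageProcessor  # noqa: F401
    print("Qwen2_5_VLImageProcessor available")
except ImportError as exc:
    print("Missing:", exc)  # align vllm/transformers per vLLM's requirements
```
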
Update preprocessor_config.json
#10 opened about 2 months ago by Isotr0py

Hardware Requirements
#9 opened about 2 months ago by shreyas0985

Vision tokens missing from chat template
#8 opened about 2 months ago by depasquale
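
If image content is not being expanded, one quick check (the token names follow the Qwen2-VL family convention; this is a diagnostic sketch, not a fix):

```python
from transformers import AutoProcessor

# A correct template should expand image content into vision placeholder tokens.
processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-72B-Instruct")
msgs = [{"role": "user", "content": [{"type": "image"},
                                     {"type": "text", "text": "Hi"}]}]
text = processor.apply_chat_template(msgs, tokenize=False, add_generation_prompt=True)
print(text)
assert "<|vision_start|>" in text and "<|image_pad|>" in text
```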

ERROR:hf-to-gguf:Model Qwen2_5_VLForConditionalGeneration is not supported
#7 opened about 2 months ago by li-gz · 1 reply

docs(readme): fix typo in README.md
#6 opened about 2 months ago by BjornMelin

Out of memory on two H100s (80 GB each) with load_in_8bit=True
#4 opened 2 months ago by Maverick17
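
A hedged sketch of the usual remedy (assumptions: bitsandbytes installed, two 80 GB GPUs visible): pass quantization via BitsAndBytesConfig, shard with device_map="auto", and cap per-GPU weight memory so activations and the KV cache still fit:

```python
from transformers import BitsAndBytesConfig, Qwen2_5_VLForConditionalGeneration

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2.5-VL-72B-Instruct",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",                    # shard across both GPUs
    max_memory={0: "70GiB", 1: "70GiB"},  # headroom for activations/KV cache
)
```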

Model Memory Requirements
#3 opened 2 months ago by nvip1204 · 2 replies

Video Inference - TypeError: process_vision_info() got an unexpected keyword argument 'return_video_kwargs'
#2 opened 2 months ago by hmanju · 2 replies
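
return_video_kwargs was added to qwen-vl-utils in later releases, so older installs raise exactly this TypeError; upgrading qwen-vl-utils is the usual fix. A defensive sketch that works either way (the video path is a placeholder):

```python
from qwen_vl_utils import process_vision_info

messages = [{"role": "user", "content": [
    {"type": "video", "video": "file:///path/to/clip.mp4"},  # placeholder path
    {"type": "text", "text": "Describe the video."},
]}]

try:
    image_inputs, video_inputs, video_kwargs = process_vision_info(
        messages, return_video_kwargs=True
    )
except TypeError:
    # Older qwen-vl-utils without the kwarg: fall back and pass no fps info.
    image_inputs, video_inputs = process_vision_info(messages)
    video_kwargs = {}
```
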
Qwen/Qwen2.5-VL-72B-Instruct-AWQ and Qwen/Qwen2.5-VL-40<B-Instruct-AWQ please
#1 opened 2 months ago by devops724 · 6 replies