Jialiang Kang (JLKang)
AI & ML interests: Vision Language Models
Organizations: None yet
ViSpec
- JLKang/ViSpec-Qwen2.5-VL-3B-Instruct (Image-Text-to-Text • 0.4B • Updated • 19)
- JLKang/ViSpec-Qwen2.5-VL-7B-Instruct (Image-Text-to-Text • 0.9B • Updated • 59)
- JLKang/ViSpec-llava-v1.6-vicuna-7b-hf (Image-Text-to-Text • 0.5B • Updated • 21)
- JLKang/ViSpec-llava-v1.6-vicuna-13b-hf (Image-Text-to-Text • 0.7B • Updated • 4)
models 5
- JLKang/ViSpec-llava-1.5-7b-hf (Image-Text-to-Text • 0.5B • Updated • 6)
- JLKang/ViSpec-llava-v1.6-vicuna-13b-hf (Image-Text-to-Text • 0.7B • Updated • 4)
- JLKang/ViSpec-llava-v1.6-vicuna-7b-hf (Image-Text-to-Text • 0.5B • Updated • 21)
- JLKang/ViSpec-Qwen2.5-VL-7B-Instruct (Image-Text-to-Text • 0.9B • Updated • 59)
- JLKang/ViSpec-Qwen2.5-VL-3B-Instruct (Image-Text-to-Text • 0.4B • Updated • 19)
datasets 0
- None public yet