Jialiang Kang (JLKang)
AI & ML interests: Vision Language Models
Organizations: None yet
ViSpec
- JLKang/ViSpec-Qwen2.5-VL-3B-Instruct: Image-Text-to-Text • 0.4B • Updated • 15
- JLKang/ViSpec-Qwen2.5-VL-7B-Instruct: Image-Text-to-Text • 0.9B • Updated • 22
- JLKang/ViSpec-llava-v1.6-vicuna-7b-hf: Image-Text-to-Text • 0.5B • Updated • 12
- JLKang/ViSpec-llava-v1.6-vicuna-13b-hf: Image-Text-to-Text • 0.7B • Updated • 9
models (5)
- JLKang/ViSpec-llava-1.5-7b-hf: Image-Text-to-Text • 0.5B • Updated • 15
- JLKang/ViSpec-llava-v1.6-vicuna-13b-hf: Image-Text-to-Text • 0.7B • Updated • 9
- JLKang/ViSpec-llava-v1.6-vicuna-7b-hf: Image-Text-to-Text • 0.5B • Updated • 12
- JLKang/ViSpec-Qwen2.5-VL-7B-Instruct: Image-Text-to-Text • 0.9B • Updated • 22
- JLKang/ViSpec-Qwen2.5-VL-3B-Instruct: Image-Text-to-Text • 0.4B • Updated • 15
datasets (0)
None public yet