Lumian-VLR-7B-Thinking

Lumian-VLR-7B-Thinking is an experimental, high-fidelity vision-language reasoning system designed for fine-grained multimodal understanding. Built on Qwen2.5-VL-7B-Instruct, it enhances image captioning, sampled video reasoning, and document comprehension through explicit grounded reasoning: it produces structured reasoning traces aligned with visual coordinates, enabling explainable multimodal outputs. The model was trained via supervised fine-tuning (SFT) on visually grounded reasoning traces and further refined with GRPO reinforcement learning, yielding step-by-step chain-of-thought reasoning with strong visual grounding.

Model Subfolder: Lumian-VLR-7B-Thinking (think-preview)

Model Folder: Lumian-VLR-7B-Thinking (no-think-single-shot)

Quick Start with Transformers (think-preview) 🤗

pip install git+https://github.com/huggingface/transformers.git qwen-vl-utils

# Load Lumian-VLR-7B-Thinking (think-preview subfolder)
import torch
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

MODEL_ID = "prithivMLmods/Lumian-VLR-7B-Thinking"
SUBFOLDER = "think-preview"
device = "cuda" if torch.cuda.is_available() else "cpu"

processor = AutoProcessor.from_pretrained(MODEL_ID, trust_remote_code=True, subfolder=SUBFOLDER)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    MODEL_ID,
    trust_remote_code=True,
    subfolder=SUBFOLDER,
    torch_dtype=torch.float16,
).to(device).eval()
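
The snippet above only loads the model. A minimal generation sketch follows, reusing the objects defined above; the demo.jpeg URL is the same Qwen sample image used in the single-shot example below, and max_new_tokens is an arbitrary choice:

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg"},
            {"type": "text", "text": "Describe this image with thinking traces."},
        ],
    }
]

# Build the chat prompt, fetch the image, and run one generation pass.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs,
    padding=True, return_tensors="pt",
).to(device)
with torch.no_grad():
    generated_ids = model.generate(**inputs, max_new_tokens=512)
# Strip the prompt tokens before decoding so only the response is printed.
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, generated_ids)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])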

Key Enhancements

  • Visually-Grounded Reasoning and Thinking Traces: Generates explicit reasoning traces tied to image regions and document structures for transparent and explainable outputs.
  • Advanced Image Captioning: Produces detailed, grounded captions with reasoning steps for improved scene understanding.
  • Sampled Video Reasoning: Handles long-duration videos with temporal reasoning for question answering and summarization.
  • Context-Aware Document Analysis: Excels at structured and unstructured content extraction with visual grounding.
  • Fine-Grained Visual Grounding: Accurately links reasoning steps to tables, charts, and graphical elements.
  • Reinforcement-Learned Thinking: GRPO training incentivizes accurate, grounded reasoning with minimal hallucinations.

Colab Demo: https://huggingface.co/prithivMLmods/Lumian-VLR-7B-Thinking/blob/main/think-preview/Lumian-VLR-7B-Thinking-Demo-Notebook/Lumian-VLR-7B-Thinking.ipynb

Thinking Traces

The model outputs reasoning and answers in a structured format:

<think>
Step 1: Identify the main elements in the image and their positions.
Step 2: Analyze the relationships between objects and surrounding context.
Step 3: Derive the final answer based on spatial reasoning and visual cues.
</think>

<answer>
The image depicts a person holding an open book with highlighted sections on the left page.
</answer>
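
Because the reasoning and the answer are wrapped in fixed tags, they are easy to separate downstream. A minimal parsing sketch; parse_trace is a hypothetical helper for illustration, not part of the model package:

import re

def parse_trace(output: str):
    # Extract the contents of the <think> and <answer> blocks, if present.
    think = re.search(r"<think>(.*?)</think>", output, re.DOTALL)
    answer = re.search(r"<answer>(.*?)</answer>", output, re.DOTALL)
    return (
        think.group(1).strip() if think else None,
        answer.group(1).strip() if answer else None,
    )

reasoning, answer = parse_trace(decoded_output)  # decoded_output: one string from batch_decode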

Quick Start with Transformers (single-shot)

from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "prithivMLmods/Lumian-VLR-7B-Thinking", torch_dtype="auto", device_map="auto"
)

processor = AutoProcessor.from_pretrained("prithivMLmods/Lumian-VLR-7B-Thinking")

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
            },
            {"type": "text", "text": "Describe this image with thinking traces."},
        ],
    }
]

text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")

generated_ids = model.generate(**inputs, max_new_tokens=256)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
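
For the sampled video reasoning use case, the same pipeline accepts video inputs. A sketch assuming a local file at /path/to/video.mp4 (a placeholder) and the frame-sampling controls exposed by qwen_vl_utils; the fps and max_pixels values here are illustrative, not tuned:

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "video",
                "video": "file:///path/to/video.mp4",  # placeholder path
                "max_pixels": 360 * 420,               # cap per-frame resolution
                "fps": 1.0,                            # sample one frame per second
            },
            {"type": "text", "text": "Summarize this video with thinking traces."},
        ],
    }
]

text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs,
    padding=True, return_tensors="pt",
).to("cuda")
generated_ids = model.generate(**inputs, max_new_tokens=256)
# Trim the prompt tokens and decode exactly as in the image example above.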

Intended Use

  • Visual reasoning with grounded, step-by-step thinking traces.
  • Explainable image captioning and sampled video reasoning.
  • Multimodal document retrieval, extraction, and analytical interpretation.
  • Transparent chain-of-thought reasoning for educational, research, and enterprise use.
  • Multilingual reasoning and structured content extraction.
  • Robotic and mobile vision-based automation with grounded decision-making.

Limitations

  • High memory requirements for long videos and large document batches.
  • Degraded accuracy on extremely low-resolution or obscured visuals.
  • Suboptimal for real-time inference on edge devices.
  • Visual token configuration strongly influences reasoning fidelity (a resolution-budget sketch follows this list).
  • Occasional reasoning drift or partial grounding errors.
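
Regarding the visual token configuration noted above: Qwen2.5-VL processors accept min_pixels/max_pixels arguments that bound how many visual tokens each image consumes, trading reasoning fidelity against memory. A sketch with illustrative bounds (the 28 * 28 factor is the pixel patch covered by one visual token):

from transformers import AutoProcessor

# Each visual token covers a 28x28 pixel patch; these bounds keep the
# per-image token budget between 256 and 1280 tokens.
min_pixels = 256 * 28 * 28
max_pixels = 1280 * 28 * 28
processor = AutoProcessor.from_pretrained(
    "prithivMLmods/Lumian-VLR-7B-Thinking",
    min_pixels=min_pixels,
    max_pixels=max_pixels,
)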

Model Stats

8.29B params · Safetensors · BF16