Feature: Visual Instruction Following: sample code fix for _inference

#5
by fatcatcat - opened

The pixel_values tensor inside the _inference function should not only call .to(model.dtype), it also needs .to(model.device).
Suggested change:

def _inference(tokenizer, model, generation_config, prompt, pixel_values=None):
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    if pixel_values is None:
        output_tensors = model.generate(**inputs, generation_config=generation_config)
    else:
        pixel_values = pixel_values.to(model.dtype).to(model.device)  # Add this line
        output_tensors = model.generate(**inputs, generation_config=generation_config, pixel_values=pixel_values)  # Change this line
    output_str = tokenizer.decode(output_tensors[0])
    return output_str

Without this change, generation fails with a device-mismatch error (the weights and the input are not on the same device).
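To see why both moves are needed, here is a minimal, self-contained sketch (using a plain torch.nn.Linear as a stand-in for the vision-language model; the names model and pixel_values are illustrative). Note that a bare nn.Module has no .device attribute, so the device is read from a parameter here:

```python
import torch

# Stand-in for the model: its parameters live on some device with some dtype.
model = torch.nn.Linear(4, 2)

# Stand-in for the image features; deliberately created with a different dtype.
pixel_values = torch.randn(1, 4, dtype=torch.float64)

# Read the target device and dtype from the model's own parameters.
param = next(model.parameters())
device, dtype = param.device, param.dtype

# Move the input to match the model before the forward pass,
# otherwise PyTorch raises a device/dtype mismatch error.
pixel_values = pixel_values.to(device, dtype=dtype)
out = model(pixel_values)
print(pixel_values.dtype == dtype, pixel_values.device == device)
```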


MediaTek Research org

Thanks for your suggestion.

I updated the demo case with:

def _inference(tokenizer, model, generation_config, prompt, pixel_values=None):
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    if pixel_values is None:
        output_tensors = model.generate(**inputs, generation_config=generation_config)
    else:
        output_tensors = model.generate(**inputs, generation_config=generation_config, pixel_values=pixel_values.to(model.device, dtype=model.dtype))
    output_str = tokenizer.decode(output_tensors[0])
    return output_str
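The updated demo folds the two moves into one call. A quick sketch showing that Tensor.to(device, dtype=...) produces the same result as chaining the two calls (values here are illustrative):

```python
import torch

t = torch.ones(2, dtype=torch.float64)

# Single call, as in the updated demo code.
a = t.to("cpu", dtype=torch.float16)

# Chained calls, as in the original suggestion.
b = t.to(torch.float16).to("cpu")

# Both paths yield the same tensor on the same device and dtype.
print(torch.equal(a, b), a.dtype)
```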
YC-Chen changed discussion status to closed
