Is the chat template correct? (issue for vLLM)
Does the chat template work correctly with vLLM?
Looking at the chat_template in tokenizer_config.json, I see it only handles "content" as a plain string and does not handle structured content parts such as "text", "image_url", or "video_url".
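For reference, the template in question can be printed straight from the tokenizer (a small sketch; trust_remote_code is assumed to be required for InternVL checkpoints):

from transformers import AutoTokenizer

# Dump the Jinja chat template that vLLM will use for this model.
tokenizer = AutoTokenizer.from_pretrained(
    "OpenGVLab/InternVL3-78B", trust_remote_code=True
)
print(tokenizer.chat_template)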
If I send a request:
from openai import OpenAI

client = OpenAI(
    # Replace the URL
    base_url="sample_url",
    api_key="NOT A REAL KEY",
)
chat_response = client.chat.completions.create(
    model="OpenGVLab/InternVL3-78B",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {
            "role": "user",
            "content": [
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://modelscope.oss-cn-beijing.aliyuncs.com/resource/qwen.png"
                    },
                },
                {"type": "text", "text": "What is the text in the illustration?"},
            ],
        },
    ],
)
I see the following InternalServerError:
TypeError: can only concatenate str (not "list") to str
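The error is consistent with the Jinja template concatenating message['content'] directly as a string. Assuming vLLM renders the request with the model's own chat template, the failure can be sketched outside the server (same messages as above):

from transformers import AutoTokenizer

# Minimal sketch reproducing the error without vLLM.
tokenizer = AutoTokenizer.from_pretrained(
    "OpenGVLab/InternVL3-78B", trust_remote_code=True
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {
        "role": "user",
        "content": [
            {
                "type": "image_url",
                "image_url": {
                    "url": "https://modelscope.oss-cn-beijing.aliyuncs.com/resource/qwen.png"
                },
            },
            {"type": "text", "text": "What is the text in the illustration?"},
        ],
    },
]

# If the template concatenates message['content'] as a string, this raises
# the same TypeError: can only concatenate str (not "list") to str
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)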
It works as expected if I send the following text-only request:
from openai import OpenAI

client = OpenAI(
    base_url="sample_url",
    api_key="NOT A REAL KEY",
)
chat_completion = client.chat.completions.create(
    model="OpenGVLab/InternVL3-78B",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {
            "role": "user",
            "content": "Give me a short introduction to large language models.",
        },
    ],
    temperature=0.01,
    stream=False,
    max_tokens=248,
)
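As a workaround sketch, the template would need to branch on the content type instead of concatenating it directly. Below is a hypothetical Jinja fragment (not the shipped InternVL template; the <image> placeholder is an assumption about the model's image token) showing the kind of branching that would accept both formats. A full template built around it could be passed to the server via vLLM's --chat-template flag:

# Hypothetical Jinja fragment, held in a Python string: render string
# content as-is, and flatten OpenAI-style lists of parts otherwise.
content_block = (
    "{% if message['content'] is string %}"
    "{{ message['content'] }}"
    "{% else %}"
    "{% for part in message['content'] %}"
    "{% if part['type'] == 'text' %}{{ part['text'] }}"
    "{% elif part['type'] == 'image_url' %}<image>"  # assumed image token
    "{% endif %}"
    "{% endfor %}"
    "{% endif %}"
)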