System prompt

#28 opened by ksooklall

When running batch inference, I have to supply both the system_prompt and the per-video prompt for every video, like this:

message = [
    {"role": "system", "content": system_prompt},  # identical for every video
    {
        "role": "user",
        "content": [
            {
                "type": "video",
                "video": path,
                "resized_height": size,
                "resized_width": size,
                "merge_size": 2,
                "temporal_patch_size": 2,
                "fps": 1,
                "video_end": 30,
            },
            {"type": "text", "text": prompt},  # per-video question
        ],
    },
]

Is there a way to set the system_prompt once at the "model level," since it doesn't change from video to video?
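For now I just factor the shared system message into a helper so I only write it once, but it is still sent with every request, so the model-level question stands. A minimal sketch of that workaround (build_video_messages, SYSTEM_PROMPT, and the default size are my own names and values, not part of the library):

SYSTEM_PROMPT = "..."  # fixed instructions shared by all videos (placeholder)

def build_video_messages(path, prompt, size=448, fps=1, video_end=30):
    """Attach the shared system prompt to a per-video user message."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {
            "role": "user",
            "content": [
                {
                    "type": "video",
                    "video": path,
                    "resized_height": size,
                    "resized_width": size,
                    "merge_size": 2,
                    "temporal_patch_size": 2,
                    "fps": fps,
                    "video_end": video_end,
                },
                {"type": "text", "text": prompt},
            ],
        },
    ]

# Usage: only the per-video pieces vary across the batch.
batch = [build_video_messages(p, prompt) for p in video_paths]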
