ONNX model has additional unknown input

#7
by SantoshHF

I have generated an ONNX model using the following code:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "HuggingFaceTB/SmolLM-1.7B"
device = "cpu" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))

torch.onnx.export(
    model,
    inputs,  # the input_ids tensor produced above
    "HuggingFaceTB_SmolLM-1.7B.onnx",
    opset_version=17,
)

[screenshot of the exported ONNX graph]

Why is there an additional input?
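
One way to pin down what the extra input is: pass the forward arguments explicitly and name them via input_names, so every graph input carries a recognizable label instead of an auto-generated one. A minimal sketch, where the output name and dynamic_axes choices are illustrative assumptions rather than requirements of this checkpoint:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "HuggingFaceTB/SmolLM-1.7B"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)
model.eval()

# Calling the tokenizer directly (instead of tokenizer.encode) also returns the attention mask.
enc = tokenizer("def print_hello_world():", return_tensors="pt")

torch.onnx.export(
    model,
    # positional args map onto forward(input_ids, attention_mask, ...)
    (enc["input_ids"], enc["attention_mask"]),
    "HuggingFaceTB_SmolLM-1.7B.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["logits"],
    dynamic_axes={
        "input_ids": {0: "batch", 1: "sequence"},
        "attention_mask": {0: "batch", 1: "sequence"},
        "logits": {0: "batch", 1: "sequence"},
    },
    opset_version=17,
)

Exporting this way also marks the batch and sequence dimensions as dynamic, so the same graph accepts prompts of any length.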

Hugging Face TB Research org

FYI - We have exported ONNX versions of the model already, which you can use here: https://huggingface.co/HuggingFaceTB/SmolLM-1.7B/tree/main/onnx
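
A minimal way to run those pre-exported files is through Optimum's ONNX Runtime integration, which builds the attention_mask, position_ids, and past_key_values inputs for you during generation. A sketch, where the subfolder and file_name arguments are assumptions based on the repo layout linked above:

from optimum.onnxruntime import ORTModelForCausalLM
from transformers import AutoTokenizer

checkpoint = "HuggingFaceTB/SmolLM-1.7B"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

# ORTModelForCausalLM wraps onnxruntime and fills in the extra
# decoder inputs automatically at each generation step.
model = ORTModelForCausalLM.from_pretrained(
    checkpoint, subfolder="onnx", file_name="model.onnx"
)

inputs = tokenizer("def print_hello_world():", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0]))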

Thanks for the prompt reply.
I have looked at the ONNX model: https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct/blob/main/onnx/model.onnx

I don't see the tokenizer producing the attention mask, position ids, etc. Could you help me understand how to pass valid inputs here?
[screenshot of the ONNX model's expected inputs]
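
For reference: the attention mask comes back when the tokenizer is called directly rather than through tokenizer.encode, and position_ids can be derived from it. A rough sketch of building the feeds by hand with onnxruntime follows; the input names and the empty past_key_values handling are assumptions based on typical Optimum decoder exports, so check sess.get_inputs() for the real names, shapes, and dtypes:

import numpy as np
import onnxruntime as ort
from transformers import AutoTokenizer

checkpoint = "HuggingFaceTB/SmolLM2-1.7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
sess = ort.InferenceSession("model.onnx")

# tokenizer(...) returns both input_ids and attention_mask;
# tokenizer.encode(...) returns only the token ids.
enc = tokenizer("def print_hello_world():", return_tensors="np")
input_ids = enc["input_ids"].astype(np.int64)
attention_mask = enc["attention_mask"].astype(np.int64)
# 0-based position of each non-padded token.
position_ids = np.cumsum(attention_mask, axis=1) - 1

feeds = {
    "input_ids": input_ids,
    "attention_mask": attention_mask,
    "position_ids": position_ids,
}

# On the first step the past_key_values inputs are empty (past length 0).
# Symbolic dims (strings) are resolved here: batch -> actual batch size,
# past sequence length -> 0; integer dims are kept as-is.
for inp in sess.get_inputs():
    if inp.name.startswith("past_key_values"):
        shape = [
            input_ids.shape[0] if isinstance(d, str) and "batch" in d
            else 0 if isinstance(d, str)
            else d
            for d in inp.shape
        ]
        feeds[inp.name] = np.zeros(shape, dtype=np.float32)

logits = sess.run(None, feeds)[0]
next_token = int(logits[0, -1].argmax())
print(tokenizer.decode([next_token]))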
