kubistmi committed
Commit 7250429 · verified · 1 Parent(s): 405c5ac

Fix typo in code examples


Hello 👋
Based on [this PR for the 2.2B version](https://huggingface.co/HuggingFaceTB/SmolVLM2-2.2B-Instruct/discussions/9), this is the same change, just for a different model (I will open one more for 500M 😉).

I have added the `dtype=torch.bfloat16` parameter when moving the processed inputs to CUDA (it is already present in some examples, but not all).
This avoids mismatched tensor dtypes between the model weights and the inputs.
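For context, here is a minimal end-to-end sketch of the corrected pattern. It assumes the 2.2B checkpoint from the linked PR and a sample image URL from the Hub documentation-images dataset; swap in this repo's model id as appropriate:

```python
import torch
from transformers import AutoProcessor, AutoModelForImageTextToText

# Assumed checkpoint, taken from the PR linked above; replace with this repo's model id.
model_path = "HuggingFaceTB/SmolVLM2-2.2B-Instruct"

processor = AutoProcessor.from_pretrained(model_path)
# Weights are loaded in bfloat16, so the floating-point inputs must match.
model = AutoModelForImageTextToText.from_pretrained(
    model_path, torch_dtype=torch.bfloat16
).to("cuda")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg"},
            {"type": "text", "text": "Describe this image."},
        ],
    },
]

inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device, dtype=torch.bfloat16)  # cast float inputs to match the model weights

generated_ids = model.generate(**inputs, do_sample=False, max_new_tokens=64)
generated_texts = processor.batch_decode(generated_ids, skip_special_tokens=True)
print(generated_texts[0])
```

Note that `BatchFeature.to()` only casts floating-point tensors, so integer token ids are left untouched by the `dtype` argument.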

Cheers,
Michal

Files changed (1)
  1. README.md +2 -2
README.md CHANGED

```diff
@@ -126,7 +126,7 @@ inputs = processor.apply_chat_template(
     tokenize=True,
     return_dict=True,
     return_tensors="pt",
-).to(model.device)
+).to(model.device, dtype=torch.bfloat16)
 
 generated_ids = model.generate(**inputs, do_sample=False, max_new_tokens=64)
 generated_texts = processor.batch_decode(
@@ -162,7 +162,7 @@ inputs = processor.apply_chat_template(
     tokenize=True,
     return_dict=True,
     return_tensors="pt",
-).to(model.device)
+).to(model.device, dtype=torch.bfloat16)
 
 generated_ids = model.generate(**inputs, do_sample=False, max_new_tokens=64)
 generated_texts = processor.batch_decode(
```