Can't run on a single H100

#27
by jvieirasobrinho - opened

I've been trying to run Llama-4-Scout-17B-16E on a single H100, but I keep getting a "CUDA out of memory" error. I'm not sure I'm getting the quantization part right. I've been keeping an eye on nvidia-smi while the model loads, and memory usage seems under control. Could someone please advise?

from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
import torch

model_name = "meta-llama/Llama-4-Scout-17B-16E"

# 4-bit quantization so the weights take roughly a quarter of their bf16 size
bnb_config = BitsAndBytesConfig(load_in_4bit=True)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",  # let accelerate place layers across available devices
)

prompt = "Explain the theory of relativity in simple terms."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,
    top_p=0.9,
    temperature=0.7,
)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print("\nResponse:\n", response)
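For comparison, this is a variant I've been playing with that caps how much of the GPU the loader may claim and lets whatever doesn't fit spill to CPU RAM. It reuses model_name and the imports above; the max_memory numbers and whether offloading actually makes this fit (or run at a usable speed) on one 80 GB card are assumptions on my part, not something I've confirmed.

# Sketch: cap GPU 0 and allow CPU offload for layers that don't fit.
# The "75GiB" / "200GiB" figures are guesses, adjust to your machine.
offload_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    llm_int8_enable_fp32_cpu_offload=True,  # keep overflow layers on CPU instead of erroring
)

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=offload_config,
    device_map="auto",
    max_memory={0: "75GiB", "cpu": "200GiB"},  # budget per device for the dispatcher
)

print(f"Reported footprint: {model.get_memory_footprint() / 1e9:.1f} GB")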

Thanks in advance!

Even on 2 H100s it's not working. They lied to us.

@kingabzpro that does seem to be the case... I've also tried with 2 H100s, but still no luck. 😕

@jvieirasobrinho I even tried on an H200. No luck. I guess I'll try 2 H200s next. Man, I'm losing money on RunPod.

Same test on my side with the Instruct version and int4 quantization; it doesn't work on 1 or 2 H100s either.

Same here: out of memory on a single H100 with vLLM (torch.OutOfMemoryError: CUDA out of memory).
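One thing worth ruling out on the vLLM side: it pre-allocates the KV cache up front based on the model's maximum context length, so even if the weights themselves fit (e.g. with a pre-quantized checkpoint), a very long default context can push it over. A minimal sketch of that check; the Instruct repo name and the specific numbers here are assumptions, not tested values.

from vllm import LLM, SamplingParams

# Shrink the pre-allocated KV cache by capping context length and GPU share.
llm = LLM(
    model="meta-llama/Llama-4-Scout-17B-16E-Instruct",
    max_model_len=8192,            # well below the model's advertised long context
    gpu_memory_utilization=0.90,   # fraction of the H100 vLLM is allowed to claim
    tensor_parallel_size=1,
)

out = llm.generate(
    ["Explain the theory of relativity in simple terms."],
    SamplingParams(max_tokens=200, temperature=0.7, top_p=0.9),
)
print(out[0].outputs[0].text)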
