Using two GPUs
#4 opened by svenh99
Is it possible to use this model with two GPUs, e.g. two GPUs with 16 GB of VRAM each? Without any special arguments, only one GPU is used; the second GPU sits idle (although it is listed by `nvidia-smi`), and `model = AutoModelForCausalLM.from_pretrained(model_id)` fails with an out-of-memory error once it hits the 16 GB of GPU RAM.
I am quite new to PyTorch, so please excuse my ignorance ...
Hi @svenh99,
Yes, that is totally possible: you simply need to pass `device_map="auto"` to `from_pretrained`. If you want to learn more, here is what that parameter does under the hood: https://huggingface.co/docs/accelerate/concept_guides/big_model_inference#loading-weights
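For reference, a minimal sketch of what that looks like (assuming `accelerate` is installed alongside `transformers`; `model_id` here is a placeholder for whatever checkpoint you are loading):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/your-model"  # placeholder: replace with the actual checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",          # shard the layers across all visible GPUs
    torch_dtype=torch.float16,  # optional: halves memory use vs. float32
)

# Inspect which device each module was placed on:
print(model.hf_device_map)
```

With two 16 GB cards, this lets you load checkpoints that do not fit on a single GPU, since the layers are split between the two devices.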
Thanks for the answer and the link.
svenh99 changed discussion status to closed