Problems using this model in Google Colab

#27
by Matteo101 - opened

When I copy and paste this:

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("AIDC-AI/Marco-o1")
model = AutoModelForCausalLM.from_pretrained("AIDC-AI/Marco-o1")

into Google Colab, the model never finishes loading because Colab reports that the RAM is completely used (even though I am on a GPU runtime). Can you help me?

The problem is that Google Colab lets me connect to a GPU runtime, but the model doesn't actually use the GPU. Can you help me?
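
A quick way to confirm whether the runtime actually exposes a GPU to PyTorch (standard torch calls, shown here only as a sketch, not anything specific to this model):

import torch

print(torch.cuda.is_available())      # should print True on a GPU runtime
print(torch.cuda.get_device_name(0))  # name of the attached GPU, e.g. "Tesla T4"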

What GPU are you using? Paid or Free?

@Matteo101 Yeah, I tried loading it directly like you did, but it failed to engage the GPU. I even moved the model to the GPU explicitly, but it kept using only the CPU. I have not had time to properly review the model implementation or the official documentation to find out why. For now, I got it to load by reducing the precision to float16; it is now using about 13GB of VRAM:

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM


class ModelWrapper:
    def __init__(self, model_name):
        self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
        # Load model with half-precision if supported, or use device_map for efficient placement
        try:
            self.model = AutoModelForCausalLM.from_pretrained(
                model_name, 
                torch_dtype=torch.float16 if torch.cuda.is_available() else None, 
                device_map="auto"
            )
        except Exception as e:
            print(f"Error loading model: {e}")
            raise
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)

        # Enable gradient checkpointing for large models
        self.model.gradient_checkpointing_enable()

        # Debug: Check if model is on GPU
        print(f"Model loaded to device: {next(self.model.parameters()).device}")

    def generate_text(self, prompt, max_length=100, num_return_sequences=1):
        inputs = self.tokenizer(prompt, return_tensors="pt")
        inputs = {key: value.to(self.device) for key, value in inputs.items()}  # Move inputs to GPU
        outputs = self.model.generate(
            **inputs, max_length=max_length, num_return_sequences=num_return_sequences
        )
        generated_texts = [
            self.tokenizer.decode(output, skip_special_tokens=True) for output in outputs
        ]
        return generated_texts
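
A minimal usage sketch (assuming the wrapper above; the prompt is my guess based on the output below):

wrapper = ModelWrapper("AIDC-AI/Marco-o1")
texts = wrapper.generate_text("Once upon a time", max_length=60)
for i, text in enumerate(texts, start=1):
    print(f"Generated Text {i}:")
    print(text)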

Results:

Model loaded to device: cuda:0
Generated Text 1:
Once upon a time, in a land far, far away, there was a kingdom with a unique rule: the king could only be chosen if he had at least one sibling. This rule was based on an ancient prophecy that stated, "The kingdom

