---
language: en
license: apache-2.0
tags:
- fine-tuned
- gemma
- lora
- gemma-garage
base_model: google/gemma-3-1b-pt
pipeline_tag: text-generation
---

# hh

Fine-tuned google/gemma-3-1b-pt model from Gemma Garage.

This model was fine-tuned using [Gemma Garage](https://github.com/your-repo/gemma-garage), a platform for fine-tuning Gemma models with LoRA.

## Model Details

- **Base Model**: google/gemma-3-1b-pt
- **Fine-tuning Method**: LoRA (Low-Rank Adaptation)
- **Training Platform**: Gemma Garage
- **Fine-tuned on**: 2025-07-29

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("LucasFMartins/hh")
model = AutoModelForCausalLM.from_pretrained("LucasFMartins/hh")

# Generate text
inputs = tokenizer("Your prompt here", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```

## Training Details

This model was fine-tuned using the Gemma Garage platform with the following configuration:

- Request ID: 7bcb21d7-3fce-4992-904e-15f6c9d652f5
- Training completed on: 2025-07-29 17:04:48 UTC

For more information about Gemma Garage, visit [our GitHub repository](https://github.com/your-repo/gemma-garage).
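The LoRA method named above freezes the base weights W and trains only a low-rank update ΔW = B·A, where the rank r is much smaller than the weight dimensions. A minimal NumPy sketch of the idea (dimensions and rank here are made up for illustration, not the actual Gemma or Gemma Garage configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r = 64, 64, 4  # hypothetical layer dims; rank r << d

W = rng.standard_normal((d_out, d_in))      # frozen base weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable low-rank factor
B = np.zeros((d_out, r))                    # B starts at zero, so the update starts at zero

def lora_forward(x):
    # Adapted layer: (W + B @ A) @ x, computed without forming the full delta
    return W @ x + B @ (A @ x)

x = rng.standard_normal(d_in)
# At initialization B == 0, so the adapted output equals the base output.
assert np.allclose(lora_forward(x), W @ x)

# Trainable parameter count: r*(d_in + d_out) for LoRA vs d_in*d_out for full fine-tuning
print(r * (d_in + d_out), "vs", d_in * d_out)  # prints "512 vs 4096"
```

This is why LoRA checkpoints are small: only the A and B factors need to be stored and trained, and they can be merged into W after training so the model loads as a plain `AutoModelForCausalLM`.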