---
language: en
license: apache-2.0
tags:
- fine-tuned
- gemma
- lora
- gemma-garage
base_model: google/gemma-3-1b-pt
pipeline_tag: text-generation
---
# h

Fine-tuned google/gemma-3-1b-pt model from Gemma Garage.

This model was fine-tuned using [Gemma Garage](https://github.com/your-repo/gemma-garage), a platform for fine-tuning Gemma models with LoRA.
## Model Details

- **Base Model**: google/gemma-3-1b-pt
- **Fine-tuning Method**: LoRA (Low-Rank Adaptation)
- **Training Platform**: Gemma Garage
- **Fine-tuned on**: 2025-07-28

## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the fine-tuned model and tokenizer from the Hub
tokenizer = AutoTokenizer.from_pretrained("LucasFMartins/h")
model = AutoModelForCausalLM.from_pretrained("LucasFMartins/h")

# Generate text from a prompt
inputs = tokenizer("Your prompt here", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
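
Because the model was fine-tuned with LoRA, the repository may ship adapter weights rather than a fully merged checkpoint. If that is the case, the adapters can be loaded with the `peft` library; the snippet below is a minimal sketch, assuming the adapters were trained on top of `google/gemma-3-1b-pt`.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Load the base model, then attach the LoRA adapters from this repository
base_model = AutoModelForCausalLM.from_pretrained("google/gemma-3-1b-pt")
model = PeftModel.from_pretrained(base_model, "LucasFMartins/h")
tokenizer = AutoTokenizer.from_pretrained("LucasFMartins/h")

# Optionally merge the adapters into the base weights for standalone deployment
model = model.merge_and_unload()
```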
## Training Details

This model was fine-tuned on the Gemma Garage platform with the following configuration:

- Request ID: 09154b2b-316a-4310-960f-b7d5a77df291
- Training completed: 2025-07-28 14:37:08 UTC

For more information about Gemma Garage, visit [our GitHub repository](https://github.com/your-repo/gemma-garage).