# Fine-tuned Model: Legal-gemma3-27b-pt-lora

## Training Configuration
- data_path: QomSSLab/Legal_DS_PT
- text_column: text
- output_dir: gemma327b_lora_chckpnts
- new_model_name: Legal-gemma3-27b-pt-lora
- model_name: gemma-3-12b-pt-ForCausalLM
- use_4bit: False
- use_lora: True
- max_seq_length: 1000
- batch_size: 1
- gradient_accu: 4
- epochs: 3
- learning_rate: 2e-05
- lora_alpha: 256
- lora_drop: 0.05
- lora_r: 256
- tune_embedding_layer: False
- hf_token: ********
- resume_from_checkpoint: False
- use_8bit_optimizer: True
- push_to_hub: True
Auto-generated after training.
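For reference, the hyperparameters above can be collected into a plain Python dict. This is a hypothetical reconstruction (the actual training script is not part of this card); the derived values shown in comments follow the usual conventions for gradient accumulation and LoRA scaling.

```python
# Hypothetical reconstruction of the training configuration listed above.
# Key names mirror the card; this is not the original training script.
config = {
    "data_path": "QomSSLab/Legal_DS_PT",
    "text_column": "text",
    "model_name": "gemma-3-12b-pt-ForCausalLM",
    "use_4bit": False,
    "use_lora": True,
    "max_seq_length": 1000,
    "batch_size": 1,
    "gradient_accu": 4,
    "epochs": 3,
    "learning_rate": 2e-05,
    "lora_alpha": 256,
    "lora_drop": 0.05,
    "lora_r": 256,
    "use_8bit_optimizer": True,
}

# Effective batch size = per-device batch size * gradient accumulation steps.
effective_batch = config["batch_size"] * config["gradient_accu"]  # 1 * 4 = 4

# LoRA scales adapter updates by alpha / r; with alpha = r = 256 this is 1.0.
lora_scaling = config["lora_alpha"] / config["lora_r"]  # 1.0
```

With `batch_size` of 1 and `gradient_accu` of 4, gradients are accumulated over four forward passes before each optimizer step, giving an effective batch size of 4 per device.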