👨‍🍳 Cooking with HF: My Recipe Contribution
My HF recipe & model: https://huggingface.co/learn/cookbook/optuna_hpo_with_transformers
This model was fine-tuned using Optuna-based hyperparameter optimization on a downstream NLP task with the Hugging Face Transformers library. The objective was to systematically search for optimal training configurations (e.g., learning rate, weight decay, batch size) to maximize model performance on the validation set.
| Recipe Source | Hugging Face Cookbook: [Optuna HPO with Transformers](https://huggingface.co/learn/cookbook/optuna_hpo_with_transformers) |
|---|---|
| Frameworks | Transformers, Optuna, PyTorch |
| Task | Text classification (can generalize to other supervised NLP tasks) |
Supported task types:

- ✅ Text classification
- ✅ Token classification (NER)
- ✅ Sequence-to-sequence (if adapted)
- ✅ Any model supported by Transformers’ `Trainer` API
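For reference, here is a minimal sketch of what an Optuna-backed search looks like with `Trainer.hyperparameter_search`. The dataset (GLUE SST-2), label count, epoch count, trial count, and search ranges below are illustrative assumptions, not values taken from the recipe:

```python
# Sketch of Optuna-backed HPO via Trainer.hyperparameter_search.
# Assumptions: GLUE SST-2 as an illustrative dataset, 2 labels,
# and illustrative search ranges; the recipe itself may differ.
import numpy as np
import evaluate
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

checkpoint = "prajjwal1/bert-tiny"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

dataset = load_dataset("glue", "sst2")
encoded = dataset.map(
    lambda batch: tokenizer(batch["sentence"], truncation=True), batched=True
)

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=preds, references=labels)

def model_init():
    # Fresh weights per trial so trials start from the same initialization.
    return AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

def hp_space(trial):
    # Search space mirroring the hyperparameters reported below.
    return {
        "learning_rate": trial.suggest_float("learning_rate", 1e-5, 5e-5, log=True),
        "weight_decay": trial.suggest_float("weight_decay", 0.0, 0.3),
        "per_device_train_batch_size": trial.suggest_categorical(
            "per_device_train_batch_size", [8, 16, 32]
        ),
    }

args = TrainingArguments(
    output_dir="hpo-output",
    eval_strategy="epoch",  # "evaluation_strategy" on transformers < 4.41
    num_train_epochs=3,
    report_to="none",
)

trainer = Trainer(
    model_init=model_init,  # required instead of `model` for HPO
    args=args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)

best_run = trainer.hyperparameter_search(
    hp_space=hp_space,
    backend="optuna",
    direction="maximize",
    n_trials=20,
    compute_objective=lambda metrics: metrics["eval_accuracy"],
)
print(best_run.hyperparameters)
```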
The Optuna study explored the search space above; the best trial found:
| Hyperparameter | Best Value |
|---|---|
| Learning rate | ~2.3e-5 |
| Weight decay | ~0.18 |
| Batch size | 16 |

Best-trial validation accuracy: ~88%
Note: Results vary by random seed and compute budget.
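Once the search finishes, a common follow-up (shown as a sketch, assuming the `trainer` and `best_run` objects from the example above) is to write the winning values back into the training arguments and do one final full training run:

```python
# Retrain once with the best trial's hyperparameters.
# Assumes `trainer` and `best_run` from the sketch above.
for name, value in best_run.hyperparameters.items():
    setattr(trainer.args, name, value)

trainer.train()
trainer.save_model("best-model")
```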
See the full example in the [Hugging Face Cookbook recipe](https://huggingface.co/learn/cookbook/optuna_hpo_with_transformers).
Base model: `prajjwal1/bert-tiny`