AWS Trainium & Inferentia documentation

🚀 Tutorials: How To Fine-tune & Run LLMs


Learn how to run and fine-tune models for optimal performance with AWS Trainium.

What you'll learn

These tutorials will guide you through the complete process of fine-tuning large language models on AWS Trainium:

  • 📊 Data Preparation: Load and preprocess datasets for supervised fine-tuning
  • 🔧 Model Configuration: Set up LoRA adapters and distributed training parameters
  • ⚡ Training Optimization: Leverage tensor parallelism, gradient checkpointing, and mixed precision
  • 💾 Checkpoint Management: Consolidate and merge model checkpoints for deployment
  • 🚀 Model Deployment: Export and test your fine-tuned models for inference
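The data-preparation step above can be sketched in plain Python: raw instruction/response pairs are formatted into a single prompt string before being fed to a supervised fine-tuning trainer. The prompt template and the field names (`instruction`, `response`, `format_example`) are illustrative assumptions, not a fixed optimum-neuron API:

```python
# A minimal sketch of supervised fine-tuning data preparation.
# The template below is a common instruction-tuning layout; adjust it
# to match whatever format your chosen dataset and trainer expect.

def format_example(example: dict) -> dict:
    """Join an instruction/response pair into one training text."""
    prompt = (
        "### Instruction:\n"
        f"{example['instruction']}\n\n"
        "### Response:\n"
        f"{example['response']}"
    )
    return {"text": prompt}

# Hypothetical raw records standing in for a loaded dataset.
raw = [
    {
        "instruction": "Summarize: AWS Trainium is a custom ML training chip.",
        "response": "Trainium is AWS's purpose-built accelerator for model training.",
    },
]

formatted = [format_example(ex) for ex in raw]
print(formatted[0]["text"])
```

With a Hugging Face `datasets` dataset, the same function would typically be applied via `dataset.map(format_example)` before training.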

Choose the tutorial that best fits your use case and start fine-tuning your LLMs on AWS Trainium today!