library,category,description
Transformers,models,Pretrained ML models
Datasets,datasets,Access and preprocess datasets of any type and size for model training
Tokenizers,tokenization,Rust-based implementations of the most common tokenization algorithms for research and production
Diffusers,models,Pretrained diffusion models
pytorch-image-models (timm),models,Pretrained image models
LeRobot,models,Pretrained robotics models and datasets
Sentence Transformers,models,Pretrained text and image embedding models
Accelerate,training,Unified distributed training framework
PEFT,training,Parameter-efficient fine-tuning methods for adapting large pretrained models to downstream tasks
TRL,training,Reinforcement learning methods for training transformer models
SetFit,training,Few-shot fine-tuning methods for Sentence Transformers
nanotron,training,Lightweight 3D parallelism pretraining for transformer models (production)
picotron,training,Lightweight 4D parallelism pretraining for transformer models (educational)
Optimum,optimization,Optimized training and inference for Transformers on specific hardware
Quanto,quantization,Quantize model weights and activations
bitsandbytes,quantization,8- and 4-bit model quantization
Text generation inference,serving,Deploy and serve high-performance large language models for inference
Text embeddings inference,serving,Deploy and serve high-performance text embedding models for inference
Gradio,ML apps,Build and share ML apps
Argilla,datasets,Collaboration tool for building high-quality datasets
Distilabel,datasets,Framework for generating synthetic data and AI feedback
candle,ML framework,Rust-based ML framework for enabling serverless inference
smolagents,agents,Build and run agents
transformers.js,ML framework,JavaScript implementation of Transformers for browsers
Safetensors,models,New format for safely storing tensors
huggingface_hub,Hub,Python client for the Hugging Face Hub
huggingface.js,Hub,JavaScript libraries for interacting with the Hub API
Lighteval,evaluation,Evaluate LLMs across multiple backends