# Philosophy

🤗 Transformers is an opinionated library built for:

- machine learning researchers and educators seeking to use, study, or extend large-scale Transformers models.
- hands-on practitioners who want to fine-tune those models or serve them in production, or both.
- engineers who just want to download a pretrained model and use it to solve a given machine learning task.

The library was designed with two strong goals in mind:

1. Be as easy and fast to use as possible:

   - We strongly limited the number of user-facing abstractions to learn; in fact, there are almost no abstractions, just three standard classes required to use each model: configuration, models, and a preprocessing class (tokenizer for NLP, image processor for vision, feature extractor for audio, and processor for multimodal inputs).
   - All of these classes can be initialized in a simple and unified way from pretrained instances by using a common `from_pretrained()` method, which downloads (if needed), caches, and loads the related class instance and associated data (the configuration's hyperparameters, the tokenizer's vocabulary, and the model's weights) from a pretrained checkpoint provided on the Hugging Face Hub or your own saved checkpoint, as sketched after this list.
   - On top of those three base classes, the library provides two APIs: [`pipeline`] for quickly using a model for inference on a given task, and [`Trainer`] to quickly train or fine-tune a PyTorch model (all TensorFlow models are compatible with `Keras.fit`); both are illustrated below.
   - As a consequence, this library is NOT a modular toolbox of building blocks for neural nets.
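
The following is a minimal sketch of how the three base classes fit together; the `bert-base-uncased` checkpoint and the `Auto*` convenience classes are used purely as illustrative choices:

```python
from transformers import AutoConfig, AutoModel, AutoTokenizer

# The same from_pretrained() call downloads (if needed), caches, and loads
# each class from a Hub checkpoint; "bert-base-uncased" is just an example.
config = AutoConfig.from_pretrained("bert-base-uncased")        # hyperparameters
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # vocabulary
model = AutoModel.from_pretrained("bert-base-uncased")          # weights

# Preprocess text and run it through the model.
inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```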
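
And a rough sketch of the two APIs built on top of those classes. The task name, checkpoint, and tiny in-memory dataset below are illustrative stand-ins for a real setup:

```python
import torch
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
    pipeline,
)

# pipeline: from task name to prediction in one call. Without an explicit
# checkpoint, a default model for the task is downloaded.
classifier = pipeline("sentiment-analysis")
print(classifier("Transformers keeps its abstractions to a minimum."))

# Trainer: fine-tune a PyTorch model. The two-example dataset here is a
# placeholder standing in for a real training set.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

texts, labels = ["great library", "hard to use"], [1, 0]
encodings = tokenizer(texts, padding=True, return_tensors="pt")

class ToyDataset(torch.utils.data.Dataset):
    def __len__(self):
        return len(labels)

    def __getitem__(self, i):
        item = {k: v[i] for k, v in encodings.items()}
        item["labels"] = torch.tensor(labels[i])
        return item

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="toy-output", num_train_epochs=1, report_to="none"),
    train_dataset=ToyDataset(),
)
trainer.train()
```

The same `Trainer` call works for any PyTorch model in the library; TensorFlow models are instead trained with standard Keras `fit`.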