PHI-S: Distribution Balancing for Label-Free Multi-Teacher Distillation
Abstract
Various visual foundation models have distinct strengths and weaknesses, both of which can be improved through heterogeneous multi-teacher knowledge distillation without labels, termed "agglomerative models." We build upon this body of work by studying the effect of the teachers' activation statistics, particularly the impact of the loss function on the resulting student model quality. We explore a standard toolkit of statistical normalization techniques to better align the different distributions and assess their effects. Further, we examine the impact on downstream teacher-matching metrics, which motivates the use of Hadamard matrices. With these matrices, we demonstrate useful properties, showing how they can be used for isotropic standardization, where each dimension of a multivariate distribution is standardized using the same scale. We call this technique "PHI Standardization" (PHI-S) and empirically demonstrate that it produces the best student model across the suite of methods studied.
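As a reading aid (a hedged sketch consistent with the abstract's description, not text quoted from the paper), one way to write an isotropic standardization built on a Hadamard matrix is: given feature mean $\mu$ and covariance $\Sigma = U \Lambda U^\top$, let $H$ be a $C \times C$ orthonormal Hadamard matrix (entries $\pm 1/\sqrt{C}$) and set

$$
\hat{x} = \frac{1}{\sigma}\, H\, U^{\top} (x - \mu), \qquad \sigma^2 = \frac{1}{C}\operatorname{tr}(\Lambda).
$$

Since $\operatorname{diag}(H \Lambda H^\top)_k = \sum_i H_{ki}^2 \lambda_i = \frac{1}{C}\operatorname{tr}(\Lambda)$ for every $k$, each output dimension ends up with unit variance even though only a rotation and one shared scalar are applied.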
Community
Better way to weight multiple teachers during network distillation. Models are public!
See instructions: https://github.com/NVlabs/RADIO
Further, "PHI Standardization" is a general standardization technique for multivariate distributions of certain sizes (approximately 2-D, or 4*x-D dimensional). It standardizes all dimensions (mean 0, std 1) isotropically, meaning that the method is non-distorting. Could potentially be used to condition the outputs of models for ingesting into other learning pipelines (e.g. apply PHI-S normalization to the outputs of vision encoder(s) when integrating into VLM).
Another application could be in information retrieval, where you project high-dimensional features down to a low dimension (e.g., using PCA). The problem is that, once you've done that, most of the information is packed into the first few dimensions, so quantization degrades fidelity. Instead, you can do "PCA(X -> Y) * Hadamard(Y)" to spread the variance evenly across the reduced space, which then allows for higher quantization fidelity (see the QuaRot paper for a similar application).
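A hedged sketch of that retrieval pipeline (illustrative only; the reduced dimension `k` is assumed to be a power of two so `scipy.linalg.hadamard` applies, and the int8 scheme is just a stand-in for whatever quantizer you actually use):

```python
import numpy as np
from scipy.linalg import hadamard

def pca_hadamard_int8(X, k=64):
    """Project X (N x D) to k dims with PCA, rotate with a normalized Hadamard
    matrix so the variance is spread evenly, then symmetrically quantize to int8."""
    Xc = X - X.mean(axis=0)
    # PCA via SVD: rows of Vt are principal directions; variance concentrates in the first ones.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Y = Xc @ Vt[:k].T
    # Orthonormal Hadamard rotation: since PCA output is decorrelated, the rotated
    # coordinates all receive (roughly) the average variance instead of a steep falloff.
    H = hadamard(k) / np.sqrt(k)
    Z = Y @ H.T
    # Simple symmetric int8 quantization; the even spread keeps per-dimension
    # rounding error balanced rather than crushing the leading principal components.
    scale = np.abs(Z).max() / 127.0
    return np.clip(np.round(Z / scale), -127, 127).astype(np.int8), scale
```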
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API:
- Student-Oriented Teacher Knowledge Refinement for Knowledge Distillation (2024)
- Kendall's $\tau$ Coefficient for Logits Distillation (2024)
- UNIC: Universal Classification Models via Multi-teacher Distillation (2024)
- BabyLlama-2: Ensemble-Distilled Models Consistently Outperform Teachers With Limited Data (2024)
- Enhancing Knowledge Distillation of Large Language Models through Efficient Multi-Modal Distribution Alignment (2024)
Models citing this paper: 5
Datasets citing this paper: 0