Anatomy of a Machine Learning Ecosystem: 2 Million Models on Hugging Face
Abstract
Analysis of model family trees on Hugging Face reveals patterns in model fine-tuning, including family resemblance, license changes, and model card standardization.
Many have observed that the development and deployment of generative machine learning (ML) and artificial intelligence (AI) models follow a distinctive pattern in which pre-trained models are adapted and fine-tuned for specific downstream tasks. However, there is limited empirical work examining the structure of these interactions. This paper analyzes 1.86 million models on Hugging Face, a leading peer-production platform for model development. Our study of model family trees -- networks that connect fine-tuned models to their base or parent models -- reveals sprawling fine-tuning lineages that vary widely in size and structure. Adopting an evolutionary-biology lens, we use model metadata and model cards to measure the genetic similarity and mutation of traits across model families. We find that models tend to exhibit a family resemblance: their genetic markers and traits overlap more when they belong to the same model family. However, these similarities depart in certain ways from standard models of asexual reproduction, because mutations are fast and directed, such that two 'sibling' models tend to exhibit more similarity than parent/child pairs. Further analysis of the directional drift of these mutations reveals qualitative insights about the open machine learning ecosystem: licenses counter-intuitively drift from restrictive commercial licenses toward permissive or copyleft licenses, often in violation of upstream licenses' terms; models evolve from multilingual compatibility toward English-only compatibility; and model cards shrink in length and standardize, turning more often to templates and automatically generated text. Overall, this work takes a step toward an empirically grounded understanding of model fine-tuning and suggests that ecological models and methods can yield novel scientific insights.
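As a purely illustrative aside (not code from the paper): a family tree of this kind can be sketched from Hub metadata, assuming the `huggingface_hub` and `networkx` libraries and treating a model card's `base_model` field as the parent pointer. The `trait_overlap` helper below is a hypothetical stand-in for the paper's genetic-similarity measures, using Jaccard overlap of repo tags.

```python
from huggingface_hub import HfApi
from huggingface_hub.utils import RepositoryNotFoundError
import networkx as nx

api = HfApi()

# Directed "family tree": an edge parent -> child means `child` declares
# `parent` in the base_model field of its model card metadata.
tree = nx.DiGraph()

# Small sample for illustration; the paper covers ~1.86M models, which
# would mean iterating over the full listing instead of limit=500.
for model in api.list_models(limit=500, cardData=True):
    base = getattr(model.card_data, "base_model", None) if model.card_data else None
    if not base:
        continue
    # base_model may be a single repo id or a list (e.g. for merged models).
    for parent in (base if isinstance(base, list) else [base]):
        tree.add_edge(parent, model.id)

# Hypothetical family-resemblance measure: Jaccard overlap of the repo
# tags ("traits") of two models.
def trait_overlap(repo_a: str, repo_b: str) -> float:
    tags_a = set(api.model_info(repo_a).tags or [])
    tags_b = set(api.model_info(repo_b).tags or [])
    union = tags_a | tags_b
    return len(tags_a & tags_b) / len(union) if union else 0.0

# Example: compare a few parent/child pairs along the recovered lineages.
for parent, child in list(tree.edges())[:5]:
    try:
        print(f"{parent} -> {child}: overlap={trait_overlap(parent, child):.2f}")
    except RepositoryNotFoundError:
        pass  # the declared parent repo may have been renamed or deleted
```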
Community
Great job @benlaufer @midah (don't forget to claim authorship of the paper so that it gets linked on your profile)
Thanks clem! Done.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- A Scalable Pretraining Framework for Link Prediction with Efficient Adaptation (2025)
- A Comprehensive Survey on Continual Learning in Generative Models (2025)
- Hyperbolic Deep Learning for Foundation Models: A Survey (2025)
- Benchmarking Pretrained Molecular Embedding Models For Molecular Representation Learning (2025)
- Reconstructing Biological Pathways by Applying Selective Incremental Learning to (Very) Small Language Models (2025)
- Large Language Models Encode Semantics in Low-Dimensional Linear Subspaces (2025)
- Reinforcement Learning Fine-Tunes a Sparse Subnetwork in Large Language Models (2025)