---
base_model:
- sapientinc/HRM-checkpoint-ARC-2
- sapientinc/HRM-checkpoint-sudoku-extreme
- sapientinc/HRM-checkpoint-maze-30x30-hard
- google/flan-t5-small
---

# HRM-LLM: A truly decentralized, human-like reasoning model built by the community

HRM-LLM is a community-driven large language model powered by the Hierarchical Reasoning Model (HRM) architecture. It aims to be truly decentralized: anyone can train, contribute, and scale it forward from anywhere. HRM-LLM is designed to think and work like a human, iterating, refining, and allocating compute adaptively, so it learns efficiently and generalizes across tasks.

## Why HRM-LLM?

- Human-like reasoning core: HRM brings hierarchical representations and adaptive computation to mimic iterative human thinking and planning.
- Adaptive Computation Time (ACT): The model dynamically decides how much “thought” to spend per token, spending more on hard tokens and less on easy ones (a toy sketch of the idea appears at the end of this card).
- Decentralized and scalable: Anyone can hop in, train a few steps, and push a unified checkpoint to the Hub. Every contribution compounds.
- Simple, hackable stack: PyTorch + Transformers + Datasets. Easy to extend, easy to improve.
- Community-aligned progress: Transparent training, open checkpoints, and community governance.

## What this model aims to do

- Break down complex problems into stages, reason across them, and refine answers over multiple internal steps.
- Learn efficient patterns via ACT, saving compute where possible and spending it where it matters most.
- Become a robust, general-purpose assistant shaped by its global community of contributors.

## How you can help

- Train a few steps in Colab (or locally) and push your contribution.
- Experiment with hyperparameters, tokenizers, datasets, or new HRM blocks.
- Share insights and logs to improve the next iteration.

## License

This project is licensed under Apache-2.0. You are free to use, modify, and distribute it, provided you keep the attribution and license notice.

## Jump in and train

- Colab (1-click): https://colab.research.google.com/drive/1xZNYC-yhwdJxzbpwRekE_rDjTki5CvEv?usp=sharing

## Quick start: contribute training from your environment

Run a short training session from your own environment to join training and push your contribution to the shared checkpoint; a sketch of such a contribution loop is shown below. That's it: share the Colab link, invite contributors, and let the community grow HRM-LLM together.
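The exact script lives in the Colab linked above. As a rough, illustrative sketch (not the project's official script), a minimal contribution could load a checkpoint, take a handful of optimizer steps, and push the result back to the Hub. The checkpoint id, push repo, dataset, and hyperparameters below are placeholders; `google/flan-t5-small` is used only because it is one of the listed base models.

```python
# Minimal, illustrative contribution loop (assumption: a Transformers-compatible
# seq2seq checkpoint and a placeholder dataset; swap in the real community
# checkpoint and whatever dataset the project announces).
import torch
from datasets import load_dataset
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

CHECKPOINT = "google/flan-t5-small"   # placeholder: one of the listed base models
PUSH_REPO = "your-username/HRM-LLM"   # placeholder: the Hub repo you push to
NUM_STEPS = 20                        # "train a few steps" and contribute

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForSeq2SeqLM.from_pretrained(CHECKPOINT)
model.train()

# Any instruction-style dataset works for a small contribution; this one is a placeholder.
dataset = load_dataset("tatsu-lab/alpaca", split=f"train[:{NUM_STEPS}]")

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

for step, example in enumerate(dataset):
    inputs = tokenizer(example["instruction"], return_tensors="pt", truncation=True, max_length=256)
    labels = tokenizer(example["output"], return_tensors="pt", truncation=True, max_length=256).input_ids
    loss = model(**inputs, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    print(f"step {step}: loss={loss.item():.4f}")

# Push the updated weights so the next contributor can continue from them
# (requires `huggingface-cli login` or an HF token in the environment).
model.push_to_hub(PUSH_REPO)
tokenizer.push_to_hub(PUSH_REPO)
```

In practice the Colab notebook handles authentication and checkpoint coordination; the snippet only shows the shape of a contribution: pull the latest weights, train briefly, push back.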
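## Appendix: a toy sketch of ACT

For the curious, here is a toy illustration of the ACT idea mentioned above: keep refining an input's state with a reasoning block until a learned halt signal says it has “thought” enough. This is a conceptual sketch, not HRM's actual implementation; the GRU cell merely stands in for an HRM block, and the names are hypothetical.

```python
# Conceptual sketch only (not HRM's real code): a per-input halting loop in the
# spirit of Adaptive Computation Time.
import torch
import torch.nn as nn

class AdaptiveThinker(nn.Module):
    def __init__(self, hidden_size: int, max_steps: int = 8):
        super().__init__()
        self.reason_step = nn.GRUCell(hidden_size, hidden_size)  # stand-in reasoning block
        self.halt_head = nn.Linear(hidden_size, 1)                # learned "am I done?" signal
        self.max_steps = max_steps

    def forward(self, x: torch.Tensor) -> tuple[torch.Tensor, int]:
        # x: (batch, hidden_size) -- the state for one position
        state = x
        steps_used = 0
        for steps_used in range(1, self.max_steps + 1):
            state = self.reason_step(x, state)
            p_halt = torch.sigmoid(self.halt_head(state))
            if bool((p_halt > 0.5).all()):  # easy inputs halt early, hard ones keep iterating
                break
        return state, steps_used

# Usage: harder inputs tend to consume more refinement steps.
thinker = AdaptiveThinker(hidden_size=64)
out, n_steps = thinker(torch.randn(4, 64))
print(out.shape, n_steps)  # torch.Size([4, 64]), between 1 and 8
```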