Liang (dreamhope)
1 follower · 6 following
AI & ML interests
None yet
Recent Activity
upvoted a changelog 25 days ago: New Model Filtering Options on the Hub
liked a model 29 days ago: nanonets/Nanonets-OCR-s
reacted to Kseniase's post with 👍 about 1 month ago:
12 Foundational AI Model Types

Let's refresh some fundamentals today to stay fluent in what we all work with. Here are some of the most popular model types that shape the vast world of AI (with examples in brackets):

1. LLM - Large Language Model (GPT, LLaMA) -> https://huggingface.co/papers/2402.06196 + history of LLMs: https://www.turingpost.com/t/The%20History%20of%20LLMs
Trained on massive text datasets to understand and generate human language. LLMs are mostly built on the Transformer architecture and work by predicting the next token (see the sketch after this post). They scale by increasing overall parameter count across all components (layers, attention heads, MLPs, etc.).

2. SLM - Small Language Model (TinyLLaMA, Phi models, SmolLM) -> https://huggingface.co/papers/2410.20011
A lightweight LM optimized for efficiency, low memory use, fast inference, and edge deployment. SLMs work on the same principles as LLMs.

3. VLM - Vision-Language Model (CLIP, Flamingo) -> https://huggingface.co/papers/2405.17247
Processes and understands both images and text. VLMs map images and text into a shared embedding space or generate captions/descriptions from both (see the CLIP sketch after this post).

4. MLLM - Multimodal Large Language Model (Gemini) -> https://huggingface.co/papers/2306.13549
A large-scale model that can understand and process multiple types of data (modalities) - usually text plus other formats such as images, video, audio, structured data, or 3D/spatial inputs. MLLMs can be LLMs extended with modality adapters or trained jointly across vision, text, audio, etc.

5. LAM - Large Action Model (InstructDiffusion, RT-2) -> https://huggingface.co/papers/2412.10047
Understands and generates action sequences by predicting action tokens (discrete or continuous instructions) that guide agents. Trained on behavior datasets, LAMs generalize across tasks, environments, and modalities - video, sensor data, etc.

Read about LRM, MoE, SSM, RNN, CNN, SAM and LNN below 👇

Also, subscribe to the Turing Post: https://www.turingpost.com/subscribe
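To make the next-token idea in item 1 concrete, here is a minimal sketch using the transformers library. The checkpoint (HuggingFaceTB/SmolLM2-360M-Instruct, from the SmolLM family named in item 2), the prompt, and greedy decoding are illustrative assumptions, not part of the original post; any causal LM on the Hub follows the same pattern.

```python
# A minimal sketch of next-token prediction, the core mechanism behind LLMs and
# SLMs alike. Checkpoint, prompt, and greedy decoding are illustrative choices.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HuggingFaceTB/SmolLM2-360M-Instruct"  # small enough to run on CPU
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "The Transformer architecture is"
inputs = tokenizer(prompt, return_tensors="pt")

# One forward pass produces logits over the vocabulary at every position;
# the distribution at the last position is the model's next-token prediction.
with torch.no_grad():
    logits = model(**inputs).logits            # shape: (batch, seq_len, vocab_size)
next_token_id = logits[0, -1].argmax().item()  # greedy pick of the most likely token
print(tokenizer.decode(next_token_id))

# Text generation is just this predict-and-append loop repeated:
output = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Swapping in a larger checkpoint changes capacity, not the mechanism, which is the distinction items 1 and 2 draw between LLMs and SLMs.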
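The shared embedding space from item 3 can likewise be sketched with CLIP. The checkpoint (openai/clip-vit-base-patch32), the sample image URL, and the candidate captions are assumptions for the demo, not from the post.

```python
# A sketch of a VLM's shared embedding space using CLIP: the image and the
# captions are encoded into the same space, and similarity scores rank how well
# they match. Checkpoint, image URL, and captions are illustrative assumptions.
import requests
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # photo of two cats
image = Image.open(requests.get(url, stream=True).raw)
captions = ["a photo of a cat", "a photo of a dog"]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)

# logits_per_image holds scaled similarities between the image embedding and
# each text embedding; softmax turns them into match probabilities.
probs = outputs.logits_per_image.softmax(dim=-1)
for caption, p in zip(captions, probs[0]):
    print(f"{caption}: {p:.3f}")
```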
Organizations
dreamhope's models (1)
dreamhope/SmolLM2-360M-Instruct-openvino · Updated May 1
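Since the repo name above indicates an OpenVINO export, here is a hedged sketch of how such a checkpoint is typically loaded with optimum-intel. That this repo ships a ready OpenVINO IR and tokenizer is an assumption based on its name alone.

```python
# A hedged sketch of loading an OpenVINO-exported causal LM with optimum-intel
# (pip install optimum[openvino]). That dreamhope/SmolLM2-360M-Instruct-openvino
# contains a ready OpenVINO IR and tokenizer is assumed from the repo name.
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "dreamhope/SmolLM2-360M-Instruct-openvino"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id)  # inference runs via OpenVINO on CPU

inputs = tokenizer("What is a small language model?", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```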