Built on a new hybrid architecture, these 350M, 700M, and 1.2B models are fast and performant, making them ideal for on-device deployment.
I recommend fine-tuning them to power your next edge application. We already provide Colab notebooks to guide you. More to come soon!
📝 Blog post: https://www.liquid.ai/blog/liquid-foundation-models-v2-our-second-series-of-generative-ai-models
🤗 Models: LiquidAI/lfm2-686d721927015b2ad73eaa38