mobilemodels
Delta-Vector/Hamanasu-4B-Instruct-KTO-V2 • 5B • Updated Apr 6 • 19 • 2
ArliAI/Gemma-2-2B-ArliAI-RPMax-v1.1 • 3B • Updated Oct 12, 2024 • 8 • 8
bartowski/Gemma-2-2B-ArliAI-RPMax-v1.1-GGUF • Text Generation • 3B • Updated Sep 23, 2024 • 109 • 4
bartowski/Phi-3.5-mini-3.8B-ArliAI-RPMax-v1.1-GGUF • Text Generation • 4B • Updated Sep 11, 2024 • 108 • 4
stuff
LLM Model VRAM Calculator 📈 • Running • 480 • Calculate VRAM requirements for running large language models
MLX My Repo 🐐 • Running • 140 • Convert and upload Hugging Face models to MLX format
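The VRAM calculator Space linked above does this interactively; as a rough back-of-the-envelope sketch only (not that Space's actual formula), the dominant terms are the weights (parameter count × bytes per parameter) plus the KV cache (which grows with context length) plus runtime overhead. The layer count, hidden size, and quantization factor in the example below are illustrative assumptions, not values taken from the listed models.

```python
# Rough VRAM estimate for serving an LLM: weights + KV cache + overhead.
# A hedged sketch, not the calculator's implementation.

def estimate_vram_gb(
    n_params_b: float,        # model size in billions of parameters
    bytes_per_param: float,   # ~2.0 for fp16/bf16, roughly 0.5-0.6 for 4-bit GGUF quants
    n_layers: int,            # illustrative; check the model's config
    hidden_size: int,         # illustrative; check the model's config
    context_len: int,
    kv_bytes: float = 2.0,    # fp16 KV cache; GQA models need less than this estimate
    overhead_gb: float = 1.0, # CUDA context, activations, fragmentation
) -> float:
    weights = n_params_b * 1e9 * bytes_per_param
    # KV cache: K and V tensors per layer, one hidden_size vector per token each
    kv_cache = 2 * n_layers * context_len * hidden_size * kv_bytes
    return (weights + kv_cache) / 1e9 + overhead_gb

# Example: a ~3.8B model in a 4-bit quant with an 8k context (assumed shapes)
print(f"{estimate_vram_gb(3.8, 0.55, n_layers=32, hidden_size=3072, context_len=8192):.1f} GB")
```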