Datasets
General collection for datasets, unsorted & largely unorganized.
- teknium/OpenHermes-2.5 • Viewer • Updated Apr 15, 2024 • 1M • 2.54k • 742
- QuixiAI/dolphin • Viewer • Updated Dec 18, 2023 • 3.73M • 1.21k • 420
- HuggingFaceH4/no_robots • Viewer • Updated Apr 18, 2024 • 10k • 1.46k • 490
- jondurbin/bagel-v0.5 • Viewer • Updated Apr 11, 2024 • 4.67M • 56 • 14
Local Function Calling Gems
These are the best function-calling LLMs one can run in under 64 GB of VRAM/unified memory; I use them on a 64 GB M1 Max MacBook. A minimal tool-calling sketch follows the list.
- google/gemma-2-27b-it • Text Generation • 27B • Updated Aug 27, 2024 • 96.6k • 546
- google/gemma-2-9b-it • Text Generation • 9B • Updated Aug 27, 2024 • 169k • 728
- 01-ai/Yi-1.5-34B-Chat • Text Generation • 34B • Updated Aug 27, 2024 • 13.2k • 272
- zai-org/glm-4-9b-chat • 9B • Updated Mar 13 • 94.4k • 686
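To illustrate how one of these can be driven for tool use locally, here is a minimal sketch using the `transformers` library with google/gemma-2-9b-it, the smallest model above (roughly 18 GB in bfloat16, so it fits easily in 64 GB; the larger models in the list are typically run quantized on this hardware). The `get_weather` tool schema and the "reply only with JSON" prompt convention are illustrative assumptions, not an official format for any of these models.

```python
# Minimal sketch, assuming `transformers`, `torch`, and `accelerate` are installed.
# The tool schema and prompt format below are hypothetical illustrations.
import json
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "google/gemma-2-9b-it"  # one of the models listed above

# Hypothetical tool definition the model is asked to call.
TOOLS = [{
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "parameters": {"city": {"type": "string"}},
}]

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"  # adjust device_map for your setup
)

# Gemma-2's chat template rejects a system role, so the tool spec goes in the user turn.
user_msg = (
    "You can call these tools:\n"
    + json.dumps(TOOLS, indent=2)
    + "\n\nIf a tool is needed, reply ONLY with JSON like "
    '{"name": "...", "arguments": {...}}.\n\n'
    "What's the weather in Berlin right now?"
)
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": user_msg}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output = model.generate(inputs, max_new_tokens=128)
reply = tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)

# Try to parse the reply as a JSON tool call; fall back to plain text otherwise.
try:
    call = json.loads(reply.strip())
    print("tool call:", call["name"], call["arguments"])
except (json.JSONDecodeError, KeyError):
    print("plain reply:", reply)
```

The same prompt-and-parse pattern carries over to the other models in the list; only the chat template details (e.g. whether a system role is allowed) and the memory budget change.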