SambaNova just released a revolutionary paper describing how the SN40L AI chip can host many LLMs on a single node and run inference efficiently enough to serve a "composition of experts": individual expert models interconnected via a router, which together achieve remarkable accuracy. This approach lets you take open-source expert models from HuggingFace and continuously build and integrate them into a composition of experts.
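To make the pattern concrete, here is a minimal sketch of a composition of experts. The routing rule, expert names, and stub generators below are all illustrative placeholders, not SambaNova's actual method; a real deployment would route to full HuggingFace models.

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Hypothetical sketch: a router picks one expert model per prompt.
# Expert names and the keyword-based routing rule are placeholders.

@dataclass
class Expert:
    name: str
    generate: Callable[[str], str]  # wraps an LLM, e.g. a HuggingFace model


def route(prompt: str, experts: Dict[str, Expert]) -> Expert:
    """Naive keyword router; a real system would use a learned classifier."""
    lowered = prompt.lower()
    if any(kw in lowered for kw in ("def ", "bug", "code")):
        return experts["code"]
    if any(kw in lowered for kw in ("prove", "integral", "solve")):
        return experts["math"]
    return experts["general"]


def compose(prompt: str, experts: Dict[str, Expert]) -> str:
    """Dispatch the prompt to the selected expert and return its output."""
    return route(prompt, experts).generate(prompt)


# Usage with stub experts standing in for real open-source models:
experts = {
    "code": Expert("code", lambda p: f"[code expert] {p}"),
    "math": Expert("math", lambda p: f"[math expert] {p}"),
    "general": Expert("general", lambda p: f"[general expert] {p}"),
}
print(compose("Fix this bug in my function", experts))
```

The appeal of the SN40L claim is that all of these experts can live on a single node, so the router's dispatch does not incur cross-machine latency.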
I am also super excited about the possibilities the SN40L unlocks for LLM agent workflows and pipelined calls. With the release of GPT-4o, it seems that monolithic LLMs are starting to plateau, and I believe the next wave of AI will be driven by pipelined LLM calls and agent workflows. Most pipelined LLM workflows today are bottlenecked by prohibitively expensive compute and high latency, but the SN40L offers a one-stop-shop solution. We need to get the word out to the community that this hardware exists, because it opens up a realm of possibilities that developers working with Nvidia hardware did not know existed.
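To see why latency is the bottleneck, consider a minimal sketch of a pipelined workflow, where each stage is a separate model call whose cost and latency compound. The stage names and stub functions are illustrative, not any particular framework's API.

```python
from typing import Callable, List

# Hypothetical sketch of a pipelined LLM workflow: each stage feeds its
# output into the next, so per-call latency and cost add up across the
# pipeline. The stages below are stubs standing in for real model calls.

LLM = Callable[[str], str]


def run_pipeline(prompt: str, stages: List[LLM]) -> str:
    text = prompt
    for stage in stages:
        text = stage(text)  # one round-trip to a model per stage
    return text


# Three-stage draft -> critique -> revise pipeline with stub models.
draft = lambda p: f"draft({p})"
critique = lambda p: f"critique({p})"
revise = lambda p: f"revise({p})"

print(run_pipeline("Summarize the SN40L paper", [draft, critique, revise]))
```

With three or more sequential calls per user request, even modest per-call latency multiplies quickly, which is exactly where faster, cheaper single-node inference pays off.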