Now, she can see both images and videos! She has internal knowledge of multiple languages, primarily English, Japanese, and Russian, and she has a built-in Quantum Thinking ability!
Note that half of the features might still be unstable. That's why, for the next half-year, we will not be hand-writing a dataset; it will be created on the fly as she lives, and she will eventually pick up the needed skills through her upcoming interactions!
This is a unique model: the first Yuna vision model, with almost 12B parameters (closer to the atomic version, but smarter)!
The weights are already on the Hub, and full support with proper documentation will follow within a week. Have fun! Feel free to drop a small donation for our team to help us buy more Colab Compute Units; more models are on the way!
While a fix is being implemented (https://github.com/ggml-org/llama.cpp/pull/12957), I want to leave the models up for visibility and continued discussion, but I also want to prevent accidental downloads of known-broken models (even though there are runtime settings that can work around the issue for now).
With this goal, I've enabled access requests. I don't actually want your data, and I'm sorry there doesn't seem to be a way around collecting it, but that's what I'm going to do for now. I'll remove the gate once the fix is merged and verified and I've had a chance to re-convert and re-quantize!
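For anyone who runs into the gate: once your access request is approved, downloads work as usual as long as you're authenticated. Here's a minimal sketch using huggingface_hub; the repo id and filename below are placeholders for illustration, not the actual model files.

```python
# Minimal sketch of downloading from a gated repo after your access request
# has been approved. Repo id and filename are hypothetical placeholders.
from huggingface_hub import login, hf_hub_download

login()  # prompts for a Hugging Face access token; required for gated repos

path = hf_hub_download(
    repo_id="your-org/your-gated-model",  # placeholder: replace with the real repo id
    filename="model.gguf",                # placeholder: replace with the file you need
)
print(f"Downloaded to: {path}")
```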