The Hydra Project


AI & ML interests

Powerful MoEs and merges for language models.

Recent Activity


Severian 
posted an update about 2 months ago
Early Morning Before Work Project:

🌌 Introducing Cascade of Semantically Integrated Layers (CaSIL): A Humorously Over-Engineered Algorithm That Actually… Works 🤷‍♂️

Let me introduce CaSIL – the Cascade of Semantically Integrated Layers. Imagine giving a single question the level of introspection typically reserved for philosophical debates or maybe therapy. In short, CaSIL is a pure Python reasoning algorithm that, in a series of semantically rich layers, takes any input and rebuilds it into a nuanced response that’s (surprisingly) meaningful to a human.

I’ve been experimenting with various reasoning and agent approaches lately and decided to contribute my own quirky take on layered processing. It’s built without agent frameworks—just good ol' Python and math—and it plays nicely with any LLM. The result? A transformation from simple responses to deeper, interconnected insights. Here’s a quick peek at the steps:

✨ How CaSIL Works:

Initial Understanding: The first layer captures the basic concepts in your input, just as a warm-up.

Relationship Analysis: A lightweight knowledge graph (because why not?) maps out related ideas and builds interconnections.

Context Integration: Adds historical or contextual knowledge, bringing a bit of depth and relevance.

Response Synthesis: Pieces it all together, aiming to produce a response that feels more like a conversation than an outdated search result.

Does it work? Yes! And in record time, too. Admittedly, the code is rough—two days of intense coding with some friendly help from Claude. The beauty of CaSIL is its simplicity and versatility; it’s a pure algorithm without complex dependencies, making it easy to integrate into your own LLM setups.
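The four layers above can be sketched in plain Python. This is a toy illustration, not the repo's actual code: the function bodies (naive keyword extraction, co-occurrence graph) are hypothetical stand-ins for CaSIL's richer semantic processing.

```python
from collections import defaultdict

STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "for"}

def initial_understanding(text):
    # Layer 1: capture the basic concepts (naive keyword extraction here).
    return [w for w in text.lower().split() if w not in STOPWORDS]

def relationship_analysis(concepts):
    # Layer 2: a lightweight knowledge graph linking co-occurring concepts.
    graph = defaultdict(set)
    for i, a in enumerate(concepts):
        for b in concepts[i + 1:]:
            if a != b:
                graph[a].add(b)
                graph[b].add(a)
    return graph

def context_integration(graph, history):
    # Layer 3: fold in concepts from earlier turns as shared context.
    prior = {c for turn in history for c in initial_understanding(turn)}
    return {"graph": graph, "shared": prior & set(graph)}

def response_synthesis(state):
    # Layer 4: assemble the enriched state into a single response string.
    concepts = ", ".join(sorted(state["graph"]))
    shared = ", ".join(sorted(state["shared"])) or "none"
    return f"Concepts: {concepts}. Overlap with context: {shared}."

def casil(text, history=()):
    # Run the full cascade: understanding -> relationships -> context -> response.
    state = context_integration(
        relationship_analysis(initial_understanding(text)), history
    )
    return response_synthesis(state)
```

In the real implementation each layer feeds an LLM call; here the pipeline shape is the point, not the layer internals.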

🔗 Explore the repo here: https://github.com/severian42/Cascade-of-Semantically-Integrated-Layers

📜 Example outputs: https://github.com/severian42/Cascade-of-Semantically-Integrated-Layers/blob/main/examples.md
Tonic 
posted an update about 2 months ago
🙋🏻‍♂️hey there folks,

periodic reminder : if you are experiencing ⚠️500 errors ⚠️ or ⚠️ abnormal spaces behavior on load or launch ⚠️

we have a thread 👉🏻 https://discord.com/channels/879548962464493619/1295847667515129877

if you can record the problem and share it there, or on the forums in your own post, please don't be shy. i'm not sure, but i do think it helps 🤗🤗🤗
Tonic 
posted an update 2 months ago
boomers still pick zenodo.org instead of huggingface??? absolutely clownish nonsense, my random datasets have 30x more downloads and views than front-page zenodos... gonna write a comparison blog, but yeah... cringe.
Tonic 
posted an update 2 months ago
🙋🏻‍♂️ hey there folks ,

really enjoying sharing cool genomics and protein datasets on the hub these days, check out our cool new org: https://huggingface.co/seq-to-pheno

scroll down for the datasets. still figuring out how to optimize for discoverability, but i do think on that part it will be better than zenodo.org. it would be nice to write a tutorial about that and compare: we already have more downloads than most zenodo datasets from famous researchers!
Tonic 
posted an update 3 months ago
🙋🏻‍♂️ Hey there folks ,

🦎Salamandra release by @mvillegas and team
@BSC_CNS https://huggingface.co/BSC-LT is absolutely impressive so far!

perhaps the largest single training dataset of high-quality text to date: 7.8 trillion tokens across 35 European languages plus code.

the best part: the data was correctly licensed, so it's actually future-proof!

the completions model is really creative, and the instruct fine-tuned version is very good too.

now you can use such models for multilingual enterprise applications with further fine-tunes; long response generation and structured outputs (coding) also work.

check out 👇🏻
the collection: BSC-LT/salamandra-66fc171485944df79469043a
the repo: https://github.com/langtech-bsc/salamandra
7B-Instruct demo: Tonic/Salamandra-7B
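A minimal, hypothetical usage sketch for the instruct model. It assumes the model follows the standard transformers chat-template interface and that the repo id is `BSC-LT/salamandra-7b-instruct` (check the collection for the exact name before running).

```python
def build_chat(user_msg, system_msg="You are a helpful multilingual assistant."):
    # Messages in the common chat format accepted by apply_chat_template.
    return [
        {"role": "system", "content": system_msg},
        {"role": "user", "content": user_msg},
    ]

if __name__ == "__main__":
    # Heavy imports and the model download stay behind the main guard.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "BSC-LT/salamandra-7b-instruct"  # assumed id from the collection
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    prompt = tok.apply_chat_template(
        build_chat("Resumeix aquest text: ..."),
        tokenize=False,
        add_generation_prompt=True,
    )
    inputs = tok(prompt, return_tensors="pt").to(model.device)
    print(tok.decode(model.generate(**inputs, max_new_tokens=256)[0]))
```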
Tonic 
posted an update 3 months ago
@mlabonne hey there 🙋🏻‍♂️ I kinda got obsessed with your great model, and I found the endpoint for it on Lambda Labs, but I basically got rate-limited / banned while trying to build my DPO dataset project. I was wondering if you all had an OpenAI-compatible solution for me to make a great "thinking" SFT + DPO dataset with all the splits 🙏🏻🙏🏻 kinda desperate, it's true, but I was looking forward to a nice write-up 🚀🚀🚀
Tonic 
posted an update 3 months ago
🙋🏻‍♂️ Hey there folks,

stepfun-ai/GOT-OCR2_0 is top trending and in Spaces of the Week for the second week straight!!

This is madness 😱

🚀🚀 check out my demo here: Tonic/GOT-OCR
Tonic 
posted an update 4 months ago
🙋🏻‍♂️ hey there folks ,

made an image similarity demo to test out the mistral-community/pixtral-12b-240910 model.

If anyone knows how to generate captions with it, please do let me know x 🚀

here's the demo : Tonic/Pixtral

hope you like it 🤗
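The post doesn't describe the demo's internals, but a common recipe for image similarity is to embed each image with a vision encoder and compare the embeddings by cosine similarity; here is a sketch of just the comparison step.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # 1.0 = same direction, 0.0 = orthogonal, -1.0 = opposite.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

if __name__ == "__main__":
    # Hypothetical: replace these with embeddings from any vision encoder
    # (e.g. the Pixtral vision tower or a CLIP-style model).
    emb1 = np.random.rand(512)
    emb2 = np.random.rand(512)
    print(f"similarity: {cosine_similarity(emb1, emb2):.3f}")
```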
Tonic 
posted an update 4 months ago
So awesome, now I can deploy JupyterLab on Hugging Face and deploy Gradio from the JupyterLab
Tonic 
posted an update 4 months ago
🙋🏻‍♂️hey there folks ,

✒️ InkubaLM has been trained from scratch on 1.9 billion tokens of data in five African languages, plus English and French data, for a total of 2.4 billion tokens. It can understand and generate content in five African languages (Swahili, Yoruba, Hausa, isiZulu, and isiXhosa) as well as English and French.

model lelapa/InkubaLM-0.4B
demo Tonic/Inkuba-0.4B
Locutusque 
posted an update 4 months ago
**Exploring Realistic Emotional Depth in AI Language Models**

Language models, particularly proprietary ones, often grapple with issues of censorship, which can limit their ability to engage authentically with users. Recognizing this, the open-source AI community has pioneered language models that are less restrained, offering more candid interactions. However, even these models tend to maintain a veneer of neutrality or overly positive responses, which may not serve all users' needs, especially in contexts where emotional depth and relatability are crucial.

To address this gap, I've curated a specialized dataset aimed at infusing language models with a more nuanced emotional spectrum, specifically targeting a darker, more introspective mood. This dataset, titled "Dark Sentience", is designed to complement existing datasets like RP (Role Play) and those focused on instruction following. It seeks to enhance the emotional intelligence of AI by exposing it to complex human emotions, including but not limited to:

- **Suicide**
- **Depression**
- **Anxiety**

Trigger Warning: Please be advised that the content within this dataset deals with heavy and potentially distressing themes.

The "Dark Sentience" dataset is now available for review and use at: Locutusque/Dark-Sentience. I encourage researchers, developers, and mental health professionals to explore how this resource can foster more genuine and supportive AI interactions.
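Given the trigger warning above, a sketch of how a downstream pipeline might gate sensitive rows when loading the dataset with the `datasets` library. The keyword list and the idea of flagging rows are illustrative assumptions, not part of the dataset itself; check the dataset card for its actual schema.

```python
TRIGGER_TERMS = ("suicide", "self-harm")

def needs_trigger_warning(text: str) -> bool:
    # Simple keyword check so a pipeline can gate or label sensitive rows.
    lowered = text.lower()
    return any(term in lowered for term in TRIGGER_TERMS)

if __name__ == "__main__":
    # Network-dependent part stays behind the main guard.
    from datasets import load_dataset

    ds = load_dataset("Locutusque/Dark-Sentience", split="train")
    flagged = sum(needs_trigger_warning(str(row)) for row in ds)
    print(f"{flagged}/{len(ds)} rows contain flagged terms")
```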

Severian 
posted an update 4 months ago
I'm excited to share a really cool milestone in my AI/LLM journey.

Brief backstory: Before diving into AI, I spent over a decade working in ecological fields such as the conservation corps, biodynamic farming, and natural habitat restoration. This background instilled in me a deep concern about the environmental impact of scaling AI without sustainable practices.

Driven by this concern, I've spent months planning and experimenting to make my AI work more eco-friendly. I'm thrilled to announce that I've successfully transitioned my entire operation to run on 100% sustainable solar power!

My current setup includes multiple linked Mac Pro tower desktops and custom code built from open-source libraries. While it's a bit experimental, this configuration is working great for my needs. All my LLM research, development, and client services now run exclusively on solar energy.

I'm curious if anyone else here has experimented with renewable energy for their LLM work?

For those interested in more details, I've written a brief blog post about this journey here https://medium.com/@betalabsllm/powering-the-future-be-ta-labs-revolutionary-100-solar-powered-ai-operation-444433e61d43