Lewdiculous (AetherArchitectural)
Lewdiculous
AI & ML interests
https://arch.datasets.fyi | [Personal Profile] General tech and LLM stuff!
https://beacons.ai/Lewdiculous
| https://rentry.co/Lewdiculous |
Mancer LLM Inference (ref): https://link.datasets.fyi/lwdmncr
Recent Activity
liked
a model
9 days ago
Nitral-AI/Illustrious-Hot-Cross-Buns-v3
liked
a model
10 days ago
Nitral-AI/Captain-Irix_Magcap-12B
Organizations

reacted to
BFFree's
post
17 days ago

reacted to
Abhaykoul's
post with 🔥
17 days ago
Post
4211
Introducing Dhanishtha 2.0: World's first Intermediate Thinking Model
Dhanishtha 2.0 is the world's first LLM designed to think between responses. Unlike other reasoning LLMs, which think only once, Dhanishtha can think, rethink, self-evaluate, and refine between responses using multiple <think> blocks.
This technique makes it highly token-efficient: it uses up to 79% fewer tokens than DeepSeek R1.
---
You can try our model from: https://helpingai.co/chat
Also, we're going to open-source Dhanishtha on July 1st.
---
For Devs:
Get your API key at https://helpingai.co/dashboard
from HelpingAI import HAI  # pip install HelpingAI==1.1.1
from rich import print

hai = HAI(api_key="hl-***********************")

response = hai.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[{"role": "user", "content": "What is the value of ∫₀^∞ x³/(x−1) dx ?"}],
    stream=True,
    hide_think=False  # Hide or show the model's thinking
)

for chunk in response:
    print(chunk.choices[0].delta.content, end="", flush=True)
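Since the multiple <think> blocks arrive as plain tags inside the streamed text, a client that wants clean output can also strip them locally. A minimal sketch of that filtering (this is not part of the HelpingAI SDK; the tag format is assumed from the post, and the sample response text is invented for illustration):

```python
import re

def strip_think(text: str) -> str:
    """Remove all <think>...</think> blocks from a model response.

    A client-side approximation of what hide_think=True presumably does.
    """
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()

# A response with two interleaved thinking blocks, as described in the post.
raw = (
    "<think>Let me set up the integral.</think>"
    "The integral diverges near x = 1."
    "<think>Double-check the pole at x = 1... yes, it diverges.</think>"
    " So the value is not finite."
)
print(strip_think(raw))
```

The non-greedy `.*?` with `re.DOTALL` is what lets each block be removed independently, even when thinking spans multiple lines.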

reacted to
davanstrien's
post with β€οΈ
7 months ago
Post
3352
🇸🇰 Hovorte po slovensky? (Do you speak Slovak?) Help build better AI for Slovak!
We only need 90 more annotations to include Slovak in the next Hugging Face FineWeb2-C dataset (data-is-better-together/fineweb-c) release!
Your contribution will help create better language models for 5+ million Slovak speakers.
Annotate here: data-is-better-together/fineweb-c.
Read more about why we're doing it: https://huggingface.co/blog/davanstrien/fineweb2-community
posted
an
update
7 months ago
Post
15616
Hello fellow LLMers, just a quick notice that some of my activity will be moved into the AetherArchitectural Community and split with
@Aetherarchio
.
AetherArchitectural
All activity should be visible on the left side of my profile.

reacted to
fantaxy's
post with 🔥
9 months ago
Post
13886
NSFW Erotic Novel AI Generation
- NSFW Text (Data) Generator for Detecting 'NSFW' Text: Multilingual Experience
The multilingual NSFW text (data) auto-generator automatically generates and analyzes adult content in various languages. It uses AI-based text generation to produce various types of NSFW content, which can then be used as training data for effective filtering models.
It supports multiple languages, including English; users select the desired language through the system prompt in the on-screen options.
Users can create datasets from the generated data, train machine learning models, and improve the accuracy of text-analysis systems. Content generation can also be customized to user specifications, maximizing the performance of NSFW text detection models.
Web: https://fantaxy-erotica.hf.space
API: https://replicate.com/aitechtree/nsfw-novel-generation
Usage Warnings and Notices: This tool is intended for research and development purposes only, and the generated NSFW content must adhere to appropriate legal and ethical guidelines. Proper monitoring is required to prevent the misuse of inappropriate content, and legal responsibility lies with the user. Users must comply with local laws and regulations when using the data, and the service provider is not liable for any issues arising from the misuse of the data.
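The pipeline the post describes, generating labeled text and then training a filtering model on it, can be sketched with a tiny classifier. Below is a toy multinomial Naive Bayes filter; the training strings and labels are invented stand-ins for the generator's output, not real data from this service:

```python
import math
from collections import Counter

# Toy stand-ins for generated, labeled training data (illustrative only;
# a real dataset would come from the generator described above).
train = [
    ("explicit adult scene description", "nsfw"),
    ("graphic erotic passage", "nsfw"),
    ("weather forecast for tomorrow", "safe"),
    ("recipe for vegetable soup", "safe"),
]

def fit(samples):
    """Fit a multinomial Naive Bayes model with add-one smoothing."""
    word_counts = {"nsfw": Counter(), "safe": Counter()}
    class_counts = Counter()
    vocab = set()
    for text, label in samples:
        words = text.split()
        word_counts[label].update(words)
        class_counts[label] += 1
        vocab.update(words)
    return word_counts, class_counts, vocab

def predict(model, text):
    """Return the label with the highest log-posterior for the text."""
    word_counts, class_counts, vocab = model
    total = sum(class_counts.values())
    best, best_score = None, -math.inf
    for label in class_counts:
        score = math.log(class_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.split():
            score += math.log((word_counts[label][w] + 1) / denom)
        if score > best_score:
            best, best_score = label, score
    return best

model = fit(train)
print(predict(model, "explicit erotic description"))
```

A production filter would use far more data and a stronger model, but the flow (generate labeled text, fit, predict) is the same.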

reacted to
fdaudens's
post with ❤️
10 months ago
Post
2626
🚨 Cool tool alert! 🚨
Finally tried Kotaemon, an open-source RAG tool for document chat!
With local models, it's free and private. Perfect for journalists and researchers.
I put Kotaemon to the test with the EPA's Greenhouse Gas Inventory. It accurately answered questions about the CO2 share of 2022 emissions and compared 2022 vs. 2021 data.
🛠️ Kotaemon's no-code interface makes it user-friendly.
- Use your own models or APIs from OpenAI or Cohere
- Great documentation & easy installation
- Multimodal capabilities + reranking
- View sources, navigate docs & create graphRAG
Kotaemon is gaining traction with 11.3k GitHub stars.
Try the online demo: cin-model/kotaemon-demo
GitHub: https://github.com/Cinnamon/kotaemon
Docs: https://cinnamon.github.io/kotaemon/usage/
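The core step a RAG tool like Kotaemon automates is retrieving the document chunks most relevant to a question before handing them to the model. A toy sketch of that retrieval using bag-of-words cosine similarity (this is not Kotaemon's actual code; the corpus strings are invented, and real tools use learned embeddings instead of word counts):

```python
import math
from collections import Counter

# Toy corpus standing in for an indexed document set; a tool like
# Kotaemon builds this index from your uploaded files automatically.
docs = [
    "CO2 made up a large share of total greenhouse gas emissions in 2022",
    "methane emissions declined slightly between 2021 and 2022",
    "the report covers agriculture, energy, and industrial processes",
]

def bow(text):
    """Bag-of-words vector as a word -> count mapping."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, documents):
    """Return documents ranked by similarity to the query, best first."""
    q = bow(query)
    return sorted(documents, key=lambda d: cosine(q, bow(d)), reverse=True)

top = retrieve("what share of 2022 emissions was CO2", docs)[0]
print(top)
```

The retrieved passage is then passed to the LLM as context, which is how such tools can cite their sources.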

reacted to
appoose's
post with π₯
11 months ago
Post
2099
Releasing HQQ Llama-3.1-70b 4-bit quantized version! Check it out at
mobiuslabsgmbh/Llama-3.1-70b-instruct_4bitgs64_hqq.
Achieves 99% of the base model performance across various benchmarks! Details in the model card.
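The "4bitgs64" in the model name refers to 4-bit weights with a quantization group size of 64. A sketch of the general idea behind group-wise 4-bit quantization (this is not HQQ's actual half-quadratic optimization, just simple min-max rounding per group; shapes and values are illustrative):

```python
import numpy as np

def quantize_4bit(weights, group_size=64):
    """Group-wise asymmetric 4-bit quantization: each group of 64 weights
    gets its own scale and zero point, so 4 bits (16 levels) per weight
    suffice. A simple min-max sketch, not HQQ's optimized method."""
    w = weights.reshape(-1, group_size)
    wmin = w.min(axis=1, keepdims=True)
    scale = (w.max(axis=1, keepdims=True) - wmin) / 15  # 16 levels -> 15 steps
    q = np.clip(np.round((w - wmin) / scale), 0, 15).astype(np.uint8)
    return q, scale, wmin

def dequantize(q, scale, wmin):
    """Reconstruct approximate weights from codes plus per-group metadata."""
    return q * scale + wmin

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 64)).astype(np.float32)
q, scale, wmin = quantize_4bit(w)
w_hat = dequantize(q, scale, wmin).reshape(w.shape)
print(float(np.abs(w - w_hat).max()))  # error bounded by half a step per group
```

Smaller groups mean tighter per-group ranges (less error) at the cost of more scale/zero-point metadata, which is the trade-off gs64 picks.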

reacted to
Undi95's
post with β€οΈ
12 months ago
Post
21571
Exciting news!
After a long wait, Ikari and I finally made a new release of our latest model on the NeverSleep repo: Lumimaid-v0.2
This model comes in different sizes, from the small Llama-3.1-8B to the gigantic Mistral-Large-123B, all finetuned by us.
Try them now!
- NeverSleep/Lumimaid-v0.2-8B
- NeverSleep/Lumimaid-v0.2-12B
- NeverSleep/Lumimaid-v0.2-70B
- NeverSleep/Lumimaid-v0.2-123B
All the datasets we used will be added, and credit will be given!
For the quants, we are waiting for a fix to be applied (https://github.com/ggerganov/llama.cpp/pull/8676)
Hope you will enjoy them!

reacted to
nroggendorff's
post with ❤️
about 1 year ago

reacted to
grimjim's
post
about 1 year ago
Post
2219
We explore extremely low-weight merging as an alternative to fine-tuning, e.g., a merge weight of 1e-4. Merge formula details here:
grimjim/kukulemon-v3-soul_mix-32k-7B
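A linear merge at such a low weight keeps the result almost identical to the base model while picking up only a faint trace of the donor. A toy sketch of the idea on small arrays (the exact merge formula is in the linked model card; this is only the generic weighted-sum form, with invented values):

```python
import numpy as np

def linear_merge(base, donor, weight=1e-4):
    """Weighted linear merge of two parameter tensors. At weight=1e-4 the
    merged tensor stays within ~1e-4 * |donor - base| of the base."""
    return (1.0 - weight) * base + weight * donor

# Invented stand-ins for corresponding parameter tensors of two models.
base = np.array([0.5, -1.2, 0.3])
donor = np.array([5.0, 2.0, -4.0])
merged = linear_merge(base, donor)
print(merged)
```

In a real merge this is applied tensor-by-tensor across both checkpoints; the interesting claim in the post is that even this tiny nudge measurably changes model behavior.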