AI & ML interests

Computer Vision Technology and Data Collection for Anime Waifu

Recent Activity

radna 
published a model 2 days ago
prithivMLmods 
posted an update 3 days ago
Introducing FLUX.2-Klein-LoRA-Studio, a demo for image editing using specialized LoRA adapters built for the FLUX.2-Klein-Distilled model. It features an edit-style gallery for multi-style image editing, including de-light, face swap, mannequin, and more. Try the demo below.

🤗Demo: prithivMLmods/FLUX.2-Klein-LoRA-Studio
🤗Collection: https://huggingface.co/collections/prithivMLmods/image-generation-apps-collection
🤗GitHub: https://github.com/PRITHIVSAKTHIUR/FLUX.2-Klein-LoRA-Studio

To learn more, visit the app page or the respective model pages.
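LoRA adapters like the ones in this studio apply a low-rank additive update to the base model's weights. A minimal sketch of the update rule, with illustrative shapes, rank, and scaling (not the actual FLUX.2-Klein weights):

```python
import torch

# Hypothetical shapes: one attention projection of the frozen base model.
d_model, rank, alpha = 64, 8, 16.0

W = torch.randn(d_model, d_model)      # frozen base weight
A = torch.randn(rank, d_model) * 0.01  # LoRA down-projection (trained)
B = torch.zeros(d_model, rank)         # LoRA up-projection (trained, zero-init)

# Effective weight at inference: W' = W + (alpha / rank) * B @ A
W_eff = W + (alpha / rank) * (B @ A)

x = torch.randn(4, d_model)
# Equivalent per-token formulation, as applied during the forward pass:
y = x @ W_eff.T
y_decomposed = x @ W.T + (alpha / rank) * (x @ A.T) @ B.T
print(torch.allclose(y, y_decomposed, atol=1e-5))  # True
```

Because the update is additive, swapping edit styles (de-light, face swap, mannequin, ...) amounts to swapping the small A/B pair while the base model stays untouched.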
AbstractPhil 
posted an update 5 days ago
GeoFlow update — two training runs on the pentachoron geometric prior (4.8M params modulating frozen SD1.5).

The 10k ImageNet run fixed fragmented anatomy and improved spatial coherence in 7 minutes.

The 50k object-relations run taught actual compositional reasoning: "red cup on top of blue book" goes from a floating cup to one correctly placed on the book.

Most interesting finding: learning happens in two phases. Object association locks first (~500 steps), spatial arrangement crystallizes after. You can watch it happen — "three candles in a triangle on a wooden tray" starts as candles side by side, then reorganizes into proper triangular formation. The tray itself rendered as a pentagon. Five vertices in, five sides out. The simplex is thinking in its own geometry.

Loss sits around 0.4 the entire time yet composition steadily improves. The prior nudges conditioning, it doesn't overwrite it.
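That last point, a prior that nudges the conditioning rather than overwriting it, can be sketched as a small residual module acting on frozen conditioning (illustrative sizes and scale, not the actual 4.8M-parameter GeoFlow prior):

```python
import torch
import torch.nn as nn

class ConditioningPrior(nn.Module):
    """Tiny trainable prior that perturbs frozen conditioning residually."""
    def __init__(self, dim: int, hidden: int = 32, scale: float = 0.1):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim, hidden), nn.SiLU(),
                                 nn.Linear(hidden, dim))
        self.scale = scale  # small scale keeps the base conditioning dominant

    def forward(self, cond: torch.Tensor) -> torch.Tensor:
        # cond: (batch, tokens, dim) text conditioning from the frozen encoder
        return cond + self.scale * self.mlp(cond)

prior = ConditioningPrior(dim=16)
cond = torch.randn(2, 77, 16)
nudged = prior(cond)
print(nudged.shape)  # torch.Size([2, 77, 16])
# The relative change stays small: the prior steers, it does not replace.
print((nudged - cond).norm() / cond.norm() < 1.0)  # True
```

A residual form like this is consistent with the observation that diffusion loss barely moves while composition improves: the base conditioning still dominates the signal.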

Weights:
AbstractPhil/sd15-geoflow-object-association
Dataset:
AbstractPhil/synthetic-object-relations
Formulas:
AbstractPhil/sd15-rectified-geometric-matching

Next up — measuring the exact entropy decay inflection point across layers to enable branching the simplex into parallel paths with different anchor deviations. Geometric ensemble attention where the branches disagree on purpose.
prithivMLmods 
posted an update 6 days ago
GLM OCR is a multimodal OCR model for complex document understanding, built on the GLM-V encoder–decoder architecture. It delivers high accuracy and strong generalization with a blazing-fast inference pipeline. The demo is live. Try it now. 🤗🚀

✨ Demo: prithivMLmods/GLM-OCR-Demo
✨ Multimodal Implementations: https://huggingface.co/collections/prithivMLmods/multimodal-implementations
✨ GitHub: https://github.com/PRITHIVSAKTHIUR/GLM-OCR-Demo
prithivMLmods 
posted an update 7 days ago
Introducing the Qwen-Image-Edit-3D-Lighting-Control app, featuring 8× horizontal and 3× elevational lighting positions for precise 3D lighting control. It enables studio-level lighting using fast Qwen Image Edit inference, paired with Multi-Angle-Lighting adapters. 🔦

🔥 Space: prithivMLmods/Qwen-Image-Edit-3D-Lighting-Control
✅ Collection: https://huggingface.co/collections/prithivMLmods/image-generation-apps-collection
📂 GitHub: https://github.com/PRITHIVSAKTHIUR/Qwen-Image-Edit-3D-Lighting-Control
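The 8 × 3 grid of lighting positions can be enumerated as azimuth/elevation pairs and mapped to unit direction vectors. A sketch of how such discrete positions might be parameterized (the specific angles here are assumptions, not the adapter's actual values):

```python
import math

# Assumed discretization: 8 horizontal (azimuth) x 3 elevational angles = 24 lights.
azimuths = [i * 360 / 8 for i in range(8)]  # 0, 45, ..., 315 degrees
elevations = [0, 30, 60]                    # degrees above the horizon (assumed)

def light_direction(azimuth_deg: float, elevation_deg: float):
    """Unit vector pointing from the subject toward the light."""
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    return (math.cos(el) * math.cos(az),
            math.cos(el) * math.sin(az),
            math.sin(el))

positions = [(az, el) for el in elevations for az in azimuths]
print(len(positions))  # 24
dx, dy, dz = light_direction(45, 30)
print(abs(dx * dx + dy * dy + dz * dz - 1.0) < 1e-9)  # True: unit length
```

Each (azimuth, elevation) pair would then select the corresponding lighting adapter or conditioning signal in the app.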
prithivMLmods 
posted an update 13 days ago
A Daggr UI version of the Qwen3-TTS demo 🔥 with custom-voice, voice-design, Qwen3-ASR, and voice-cloning nodes. No remote Spaces are used for API inference; all functions run in-app. Powered by t4-m and built with [email protected] and gradio@6.

👉Demo: prithivMLmods/Qwen3-TTS-Daggr-UI
⭐Github: https://github.com/PRITHIVSAKTHIUR/Qwen3-TTS-Daggr-UI
AbstractPhil 
posted an update 15 days ago
AbstractPhil/tinyflux-experts
Introducing the "blot" expert, sd15-flow-sol. The twin flow-matching experts for tinyflux-lailah, sd15-flow-lune and sd15-flow-sol, will be used in tandem to train tinyflux-Lailah. sd15-flow-sol never reached full flow-matching prediction, so an epsilon-to-v-prediction conversion is required. All experts will live in the tinyflux-experts repo, including all the critical checkpoint sets.
Lune was heavily finetuned in the sd3 style with an adapted shift-timestep system, after David's interpolation converted sd15 into a geometric basis.
Sol was abandoned after 50 epochs with David and considered overcooked and rigid, until I noticed its geometric structure today. Lune doesn't produce geometric structure nearly as solid as Sol's. Lune delivers improved fidelity and detail, but Sol produces something very different: aligned with sd15's behavior and fully representative of the 5-point 4-simplex structure that David brought to the table.

Sol is essentially a nearly perfect blob-forming geometric blotter. Sol is SD15, and yet Sol was trained using a specific pattern-recognizing, timestep-aligned David model. David was tasked with classifying timesteps and patterns using complex deep-recognition structural analysis, layer by layer, forming full-scale opinions after watching the entirety of sd15's structure during training.

Even though sd15-flow-sol was abandoned, Sol's structure is highly effective at understanding timestep blotting interpolation. I didn't realize how crucial this was until Lailah started to show rigidity and compartmentalized behavior with sequence, which likely happens to all flow-matching models.
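The epsilon-to-v-prediction conversion this post calls for follows the standard v-parameterization (Salimans & Ho): with x_t = α_t·x_0 + σ_t·ε, the target is v_t = α_t·ε − σ_t·x_0, so an ε-prediction can be converted given x_t and the noise schedule. A minimal sketch:

```python
import torch

def eps_to_v(eps_pred, x_t, alpha_t, sigma_t):
    """Convert an epsilon prediction to a v prediction.

    Uses x_t = alpha_t * x_0 + sigma_t * eps, so
    x_0 = (x_t - sigma_t * eps) / alpha_t and v = alpha_t * eps - sigma_t * x_0.
    """
    x0_pred = (x_t - sigma_t * eps_pred) / alpha_t
    return alpha_t * eps_pred - sigma_t * x0_pred

# Sanity check against the definition with known x_0 and eps.
x0, eps = torch.randn(2, 4), torch.randn(2, 4)
alpha, sigma = 0.8, 0.6  # alpha^2 + sigma^2 = 1 (variance-preserving schedule)
x_t = alpha * x0 + sigma * eps
v_direct = alpha * eps - sigma * x0
print(torch.allclose(eps_to_v(eps, x_t, alpha, sigma), v_direct, atol=1e-6))  # True
```

The same identity run in reverse recovers ε from a v-prediction, which is what makes the two parameterizations interchangeable per timestep.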

AbstractPhil/sd15-flow-matching

AbstractPhil/geo-david-collective-sd15-distilled
AbstractPhil/geo-david-collective-sd15-base-e40
prithivMLmods 
posted an update 16 days ago
Qwen-Image-Edit-Object-Manipulator Space is now featured in Hugging Face Space of the Week. It enables object manipulation such as extracting objects, adding designs, and removing objects or designs from the red highlighted area using specialized adapters.

🔥Do enjoy the demo! ~ prithivMLmods/Qwen-Image-Edit-Object-Manipulator

Collections:
🧨Adapters-1: https://huggingface.co/collections/prithivMLmods/qwen-image-edit-exps
🧨Adapters-2: https://huggingface.co/collections/prithivMLmods/qie-jan-23-26
🧨Adapters-3: https://huggingface.co/collections/prithivMLmods/qwen-image-edit-object-manipulator

⭐Github: https://github.com/PRITHIVSAKTHIUR/Qwen-Image-Edit-Object-Manipulator

To learn more, visit the app page or the respective model pages.
AbstractPhil 
posted an update 18 days ago
Meet FluxLailah (AbstractPhil/tiny-flux-deep), a 220M-parameter Flux variant currently pretraining at BF16. She is experimental and does not produce solid images yet, but she is producing. There is both an EMA and a raw weights pair producing different images; the EMA is particularly interesting at times.
Lailah uses flan-t5-base, clip-vit-l-14, and the Black Forest Labs Flux1s VAE.
The sequence limit is 128 and images are 512x512 for now. Lailah's early form is based on three variants. TinyFlux's weights were carefully planted into a deeper structure and trained again, dubbed TinyFlux-Deep. This variant has 15 dual-stream blocks and 25 single-stream blocks, with nearly identical weight code to Flux and a similar attention mechanism, but intentionally deviant and compacted, with careful consideration of scaling and the purpose of each mechanism.
She went through quite a few growing pains with her earlier attention mechanism, which required a reimagining today and careful consideration of the consequences. Now I present a preliminary look into Lailah.
Preliminary training is still heavily under way, the mechanisms are still being augmented, and her stability is being measured. The potential for fidelity, depth, and quality is also still being measured, so I will shift attention and pivot utility based on needs over time.
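The architecture described in this post can be summarized as a config sketch. The block counts, encoders, and limits come from the post itself; the field names are hypothetical, not the repo's actual config schema:

```python
from dataclasses import dataclass

@dataclass
class TinyFluxDeepConfig:
    # Components named in the post
    text_encoder: str = "flan-t5-base"
    clip_encoder: str = "clip-vit-l-14"
    vae: str = "flux1s-vae"  # Black Forest Labs Flux1s VAE
    # Depth: TinyFlux weights planted into a deeper structure
    dual_stream_blocks: int = 15
    single_stream_blocks: int = 25
    # Current training limits
    max_seq_len: int = 128
    image_size: int = 512
    dtype: str = "bf16"

cfg = TinyFluxDeepConfig()
print(cfg.dual_stream_blocks + cfg.single_stream_blocks)  # 40 total blocks
```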
prithivMLmods 
posted an update 19 days ago
Introducing QIE-2511-Zoom-Master for highlight-guided area zoom-in, enabling lossless zooming within a drawn square area, and QIE-2511-Object-Remover-v2 for precise object or highlight-guided area cleanup. These experimental adapters are trained on QIE-2511. Find the adapters below.

🕹️QIE-2511-Zoom-Master : prithivMLmods/QIE-2511-Zoom-Master
🕹️QIE-2511-Object-Remover-v2: prithivMLmods/QIE-2511-Object-Remover-v2

🤗Demo: prithivMLmods/Qwen-Image-Edit-Object-Manipulator

📂Collection: https://huggingface.co/collections/prithivMLmods/qwen-image-edit-exps

To learn more, visit the app page or the respective model pages.
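Highlight-guided zoom can be thought of as cropping the drawn square and rendering it back at full canvas size; the adapter's job is to do this losslessly, synthesizing detail rather than interpolating. A naive baseline crop for comparison (numpy, hypothetical box coordinates):

```python
import numpy as np

def crop_square(image: np.ndarray, x0: int, y0: int, size: int) -> np.ndarray:
    """Extract the user-drawn square region (the area to zoom into)."""
    return image[y0:y0 + size, x0:x0 + size]

# Placeholder canvas; in practice this would be the input photo.
image = np.zeros((64, 64, 3), dtype=np.uint8)
region = crop_square(image, x0=16, y0=8, size=32)
print(region.shape)  # (32, 32, 3)
# A naive zoom would now resize `region` back to 64x64 via interpolation;
# the adapter instead regenerates the region at full resolution.
```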
AbstractPhil 
posted an update 28 days ago
pytorch-parallel-compiler v0.5.0 upgrades:
* Complex benchmarking for wide primitive objects is now supported, including multiple presets for quick tests on hardware.
* All supported primitives either have validity checks or will have them.
* 6 new wide layers are supported directly and will be a key part of the autotuner before v1.0.
* WideTracedModel is a preliminary auto-builder, so the user doesn't need to build wide models manually by gathering layers.

https://github.com/AbstractEyes/pytorch-parallel-compiler

New Layers for 0.5.0:
WideGRU, WideLSTM, WideGroupNorm, WideMultiheadedAttention, WideInstancenorm1/2d, WideConv3d
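The core idea behind these wide layers is replacing N parallel copies of a layer with one batched kernel. A minimal sketch of a "wide" linear built on torch.bmm (illustrative only, not the library's actual WideLinear implementation):

```python
import torch
import torch.nn as nn

class NaiveWideLinear(nn.Module):
    """N independent Linear(d_in, d_out) layers fused into one batched matmul."""
    def __init__(self, n: int, d_in: int, d_out: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(n, d_in, d_out) * 0.02)
        self.bias = nn.Parameter(torch.zeros(n, 1, d_out))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (n, batch, d_in) -> one bmm instead of n separate matmuls
        return torch.bmm(x, self.weight) + self.bias

wide = NaiveWideLinear(n=6, d_in=8, d_out=4)
x = torch.randn(6, 32, 8)
out = wide(x)
print(out.shape)  # torch.Size([6, 32, 4])
# Per-member check: member 2 matches an ordinary matmul with its own weights.
ref = x[2] @ wide.weight[2] + wide.bias[2]
print(torch.allclose(out[2], ref, atol=1e-5))  # True
```

One fused kernel launch replaces n small ones, which is where the throughput gains on GPU come from.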

Upcoming for 1.0:
* WideTracedModel fully building any supported layer pattern, with multiple autotune candidates for auto-selection.
* Module cherry-picking per use case; e.g. if a WideLinear replacement alone benefits your case by 35% while wide attention reduces it by 10%, replace only the linear layers and skip attention.
* All (roughly 32 more) commonly used PyTorch layer systems supported in one form or another, with wide-batched kernels that benefit both eager and compiled execution; many require reworks or complete remakes.
* Autotuning of wide formats based on hardware response to the kernels: kernel chunking for big, slow processes such as LSTM; kernel fusion for small processes with excess overhead; expanding kernels with masking to fit specific use-case paradigms and hardware; and a series of smaller but important optimizations along the way.
* Full transformer and RoPE support, with wide-batched optimizations throughout the structures to allow more robust autoregressive throughput.
* Additional Conv1d, Conv2d, and Conv3d optimizations.

Beyond version 1.0:
* Entire diffusion structures specifically kernelized for high-efficiency utilization in both eager and compiled modes.
* Video-diffusion-specific targets meant to heavily reduce GPU computation cost and increase GPU throughput.
AbstractPhil 
posted an update about 1 month ago
The long version: this is a proof of concept. The ensemble-compilation vmap prototype is functional and can be used to increase throughput with wider batches on FFN, MLP, LLM, or other models, not just ensembles. The system traces your model and creates stages of functional activation; based on the stage, it combines or prunes combinations of stages, assigning your layers to batched functional calls that pressure your GPU with fewer loops and directly curated CUDA graph compliance where applicable. Identical weights yield identical results, at the cost of hardware and VRAM.

TLDR:
This is an ensemble optimization adapted to standard models. It yields large speed improvements through increased throughput for inference and training alike, using carefully traced, staged vmap structures.

https://github.com/AbstractEyes/pytorch-parallel-compiler

The early list of layers isn't fully represented yet, so this is a preliminary look into the potential of this structure once fully fleshed out.

MLP (N=100, batch=32, CUDA):
Eager:    2-3x speedup
Compiled: 35-40x speedup


ResBlock (N=20, batch=8, CUDA):
Eager:    ~5x speedup  
Compiled: ~10x speedup


This is early testing, and so far the results indicate that widening your model with adjacent, shared, batched vmaps over uniformly staged models yields considerably higher inference throughput at the cost of additional hardware utilization.
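The staged-vmap idea can be illustrated with PyTorch's functional API. This is the standard torch.func ensembling pattern that the benchmarks above presumably build on, not the library's internal tracing machinery:

```python
import copy
import torch
from torch.func import stack_module_state, functional_call

# N identical-architecture MLPs: the "stages" to widen into one batched call.
N = 8
models = [torch.nn.Sequential(torch.nn.Linear(16, 32), torch.nn.ReLU(),
                              torch.nn.Linear(32, 16)) for _ in range(N)]

# Stack all parameters along a new leading dim; keep one stateless skeleton.
params, buffers = stack_module_state(models)
base = copy.deepcopy(models[0]).to("meta")

def call_one(p, b, x):
    # Run the skeleton with one member's parameters, purely functionally.
    return functional_call(base, (p, b), (x,))

x = torch.randn(N, 4, 16)                       # one input batch per member
out = torch.vmap(call_one)(params, buffers, x)  # single batched dispatch, no loop
print(out.shape)  # torch.Size([8, 4, 16])

# Identical weights yield identical results to the eager per-model loop:
ref = torch.stack([m(xi) for m, xi in zip(models, x)])
print(torch.allclose(out, ref, atol=1e-5))  # True
```

The loop of N small forward passes collapses into one vmapped call, which is also what makes the result CUDA-graph friendly when compiled.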

This is akin to lining up all your systems and uniformly passing the necessary inputs through a shared frozen representation gate.

Training with this is not tested or supported yet; use at your own risk.