Harsha Vardhan Khurdula (HV-Khurdula)
0 followers · 2 following
GitHub: Khurdhula-Harshavardhan · LinkedIn: harsha-vardhan-khurdula-99b400183
AI & ML interests
Transformers, Vision-Transformers, Multi-Agent Frameworks, Dataset Curation & Benchmarking.
Recent Activity
updated a dataset 9 days ago: JigsawStack/fleurs_mlt_benchmark
published a dataset 9 days ago: JigsawStack/fleurs_mlt_benchmark
reacted to Kseniase's post with 🚀 16 days ago:
13 New types of LoRA

LoRA (Low-Rank Adaptation) is a popular lightweight method for fine-tuning AI models. Instead of updating the full model, it adds small trainable components (low-rank matrices) while keeping the original weights frozen; only these adapters are trained. Recently, many interesting new LoRA variations came out, so it's a great time to take a look at these 13 clever approaches (a minimal sketch of the base LoRA update follows the list):

1. T-LoRA → https://huggingface.co/papers/2507.05964
A timestep-dependent LoRA method for adapting diffusion models with a single image. It dynamically adjusts updates and uses orthogonal initialization to reduce overlap, achieving a better fidelity–alignment balance than standard LoRA.

2. SingLoRA → https://huggingface.co/papers/2507.05566
Simplifies LoRA by using only one small matrix instead of the usual two, multiplying it by its own transpose (A × Aᵀ). It uses half the parameters of LoRA and avoids scale mismatch between the two matrices.

3. LiON-LoRA → https://huggingface.co/papers/2507.05678
Improves control and precision in video diffusion models when training data is limited. It builds on LoRA, adding three key principles: linear scalability, orthogonality, and norm consistency. A controllable token and modified self-attention enable smooth adjustment of motion.

4. LoRA-Mixer → https://huggingface.co/papers/2507.00029
Combines LoRA with mixture-of-experts (MoE) to adapt LLMs for multiple tasks. It dynamically routes task-specific LoRA experts into the linear projections of attention modules, supporting both joint training and frozen expert reuse.

5. QR-LoRA → https://huggingface.co/papers/2507.04599
Separates content and style when combining multiple LoRA adapters. It applies QR decomposition to structure parameter updates: the orthogonal Q matrix reduces interference between features, and the R matrix captures specific transformations.

Read further in the comments 👇

If you like it, also subscribe to the Turing Post: https://www.turingpost.com/subscribe
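To make the base mechanism in the post concrete, here is a minimal, hypothetical PyTorch sketch of a LoRA linear layer: the pretrained weight stays frozen and a trainable low-rank product B·A is added on top (the SingLoRA idea above would replace B·A with A·Aᵀ on a square layer). The class name, rank, and scaling hyperparameters are illustrative assumptions, not taken from any of the papers linked above.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA sketch: y = base(x) + (alpha/r) * x @ A^T @ B^T.

    The pretrained weights are frozen; only the low-rank factors
    A (r x in_features) and B (out_features x r) are trained.
    Names and defaults here are illustrative, not from the papers.
    """

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # keep original weights frozen
            p.requires_grad = False
        # Common LoRA init: A small random, B zero, so the adapter
        # starts as a no-op and training begins from the base model.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus scaled low-rank correction.
        return self.base(x) + self.scale * (x @ self.A.T) @ self.B.T

# Usage: wrap an existing projection; only A and B remain trainable.
layer = LoRALinear(nn.Linear(512, 512))
x = torch.randn(4, 512)
y = layer(x)  # shape (4, 512)
print([n for n, p in layer.named_parameters() if p.requires_grad])  # ['A', 'B']
```

With rank r = 8 on a 512×512 layer, the adapter trains 2 × 8 × 512 ≈ 8K parameters instead of the frozen 262K, which is the parameter saving the post refers to.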
Organizations: JigsawStack
Papers (1)
arxiv:2411.15201
Models (8)
HV-Khurdula/speaker-segmentation-0.2-vox • Updated Apr 8 • 5
HV-Khurdula/big-lama • Updated Mar 31 • 5
HV-Khurdula/jigsawstack-segmentation-v0.2 • Updated Mar 28 • 2
HV-Khurdula/jigsawstack-segmentation-v0.1 • 0.0B • Updated Mar 28 • 1
HV-Khurdula/qwen2-7b-instruct-trl-sft-ChartQA • Updated Mar 6
HV-Khurdula/Llama-3.2-1B-Vision-Caption • 2B • Updated Nov 17, 2024 • 3 • 1
HV-Khurdula/Dua-Vision-Base • Image-Text-to-Text • 0.2B • Updated Oct 29, 2024 • 5 • 2
HV-Khurdula/llama-3.2-1b-hindi • Updated Oct 23, 2024 • 3
Datasets (0)
None public yet