
Nicolay Rusnachenko

nicolay-r

AI & ML interests

Information Retrieval・Medical Multimodal NLP (🖼+📝) Research Fellow @BU_Research・software developer http://arekit.io・PhD in NLP

Recent Activity

reacted to singhsidhukuldeep's post with 🧠 about 13 hours ago

Organizations

None yet

nicolay-r's activity

reacted to DawnC's post with ❀️ about 13 hours ago
🌟 PawMatchAI: Making Breed Selection More Intuitive! 🐕
Excited to share the latest update to this AI-powered companion for finding your perfect furry friend! The breed recommendation system just got a visual upgrade to help you make better decisions.

✨ What's New?
Enhanced breed recognition accuracy through strategic model improvements:
- Upgraded to a fine-tuned ConvNeXt architecture for superior feature extraction
- Implemented progressive layer unfreezing during training
- Optimized data augmentation pipeline for better generalization
- Achieved 8% improvement in breed classification accuracy

🎯 Key Features:
- Smart breed recognition powered by AI
- Visual matching scores with intuitive color indicators
- Detailed breed comparisons with interactive tooltips
- Lifestyle-based recommendations tailored to your needs

💭 Project Vision
Combining my passion for AI and pets, this project represents another step toward my goal of creating meaningful AI applications. Each update aims to make the breed selection process more accessible while improving the underlying technology.

👉 Try it now: DawnC/PawMatchAI

Your likes ❀️ on this space fuel this project's growth!

#AI #MachineLearning #DeepLearning #Pytorch #ComputerVision
reacted to sayakpaul's post with 🚀 about 13 hours ago
Commits speak louder than words 🤪

* 4 new video models
* Multiple image models, including SANA & Flux Control
* New quantizers -> GGUF & TorchAO
* New training scripts

Enjoy this holiday-special Diffusers release 🤗
Notes: https://github.com/huggingface/diffusers/releases/tag/v0.32.0
reacted to singhsidhukuldeep's post with 🧠 about 13 hours ago
Exciting News in AI: JinaAI Releases JINA-CLIP-v2!

The team at Jina AI has just released a groundbreaking multilingual multimodal embedding model that's pushing the boundaries of text-image understanding. Here's why this is a big deal:

🚀 Technical Highlights:
- Dual encoder architecture combining a 561M parameter Jina XLM-RoBERTa text encoder and a 304M parameter EVA02-L14 vision encoder
- Supports 89 languages with 8,192 token context length
- Processes images up to 512Γ—512 pixels with 14Γ—14 patch size
- Implements FlashAttention2 for text and xFormers for vision processing
- Uses Matryoshka Representation Learning for efficient vector storage

⚑️ Under The Hood:
- Multi-stage training process with progressive resolution scaling (224β†’384β†’512)
- Contrastive learning using InfoNCE loss in both directions
- Trained on massive multilingual dataset including 400M English and 400M multilingual image-caption pairs
- Incorporates specialized datasets for document understanding, scientific graphs, and infographics
- Uses hard negative mining with 7 negatives per positive sample

📊 Performance:
- Outperforms previous models on visual document retrieval (52.65% nDCG@5)
- Achieves 89.73% image-to-text and 79.09% text-to-image retrieval on CLIP benchmark
- Strong multilingual performance across 30 languages
- Maintains performance even with 75% dimension reduction (256D vs 1024D)

🎯 Key Innovation:
The model solves the long-standing challenge of unifying text-only and multi-modal retrieval systems while adding robust multilingual support. Perfect for building cross-lingual visual search systems!

Kudos to the research team at Jina AI for this impressive advancement in multimodal AI!
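The Matryoshka trick behind that last point amounts to keeping only the leading components of each embedding and re-normalizing. A minimal NumPy sketch of the idea (illustrative only, not Jina's implementation):

```python
import numpy as np

# With Matryoshka Representation Learning, the leading dimensions of an
# embedding carry most of the signal, so a 1024-d vector can be cut down
# to its first 256 components and re-normalized for cosine similarity.
rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 1024)).astype(np.float32)    # stand-in embeddings
emb /= np.linalg.norm(emb, axis=1, keepdims=True)

small = emb[:, :256]                                   # 75% dimension reduction
small /= np.linalg.norm(small, axis=1, keepdims=True)  # re-normalize for cosine search
```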
reacted to ginipick's post with 🔥 about 13 hours ago
🎨 GiniGen Canvas-o3: Intelligent AI-Powered Image Editing Platform
Transform your images with precision using our next-generation tool that lets you extract anything from text to objects with simple natural language commands! 🚀
📌 Key Differentiators:

Intelligent Object Recognition & Extraction
• Freedom to select any target (text, logos, objects)
• Simple extraction via natural language commands ("dog", "signboard", "text")
• Ultra-precise segmentation powered by GroundingDINO + SAM
Advanced Background Processing
• AI-generated custom backgrounds for extracted objects
• Intuitive object size/position adjustment
• Multiple aspect ratio support (1:1, 16:9, 9:16, 4:3)
Progressive Text Integration
• Dual text placement: over or behind images
• Multi-language font support
• Real-time font style/size/color/opacity adjustment

🎯 Use Cases:

Extract logos from product images
Isolate text from signboards
Select specific objects from scenes
Combine extracted objects with new backgrounds
Layer text in front of or behind images

💫 Technical Features:

Natural language-based object detection
Real-time image processing
GPU acceleration & memory optimization
User-friendly interface

🎉 Key Benefits:

User Simplicity: Natural language commands for object extraction
High Precision: AI-powered accurate object recognition
Versatility: From basic editing to advanced content creation
Real-Time Processing: Instant result visualization

Experience the new paradigm of image editing with GiniGen Canvas-o3:

Seamless integration of multiple editing functions
Professional-grade results with consumer-grade ease
Perfect for social media, e-commerce, and design professionals

Whether you're extracting text from complex backgrounds or creating sophisticated visual content, GiniGen Canvas-o3 provides the precision and flexibility you need for modern image editing!

GO! ginigen/CANVAS-o3
reacted to InferenceIllusionist's post with 🔥 1 day ago
MilkDropLM-32b-v0.3: Unlocking Next-Gen Visuals ✨

Stoked to release the latest iteration of our MilkDropLM project! This new release is based on the powerful Qwen2.5-Coder-32B-Instruct model using the same great dataset that powered our 7b model.

What's new?

- Genome Unlocked: Deeper understanding of preset relationships for more accurate and creative generations.

- Preset Revival: Breathe new life into old presets with our upgraded model!

- Loop-B-Gone: Say goodbye to pesky loops and hello to smooth generation.

- Natural Chats: Engage in more natural sounding conversations with our LLM than ever before.

Released under Apache 2.0, because sharing is caring!

Try it out: InferenceIllusionist/MilkDropLM-32b-v0.3

Shoutout to @superwatermelon for his invaluable insights and collab, and to all those courageous members in the community that have tested and provided feedback before!
reacted to ehristoforu's post with 🤗 1 day ago
βœ’οΈ Ultraset - all-in-one dataset for SFT training in Alpaca format.
fluently-sets/ultraset

❓ Ultraset is a comprehensive dataset for training Large Language Models (LLMs) using the SFT (Supervised Fine-Tuning) method. This dataset consists of over 785 thousand entries in eight languages: English, Russian, French, Italian, Spanish, German, Chinese, and Korean.

🤯 Ultraset solves the problem faced by users when selecting an appropriate dataset for LLM training. It combines various types of data required to enhance the model's skills in areas such as text writing and editing, mathematics, coding, biology, medicine, finance, and multilingualism.

🤗 For effective use of the dataset, it is recommended to utilize only the "instruction," "input," and "output" columns and train the model for 1-3 epochs. The dataset does not include DPO or Instruct data, making it suitable for training various types of LLM models.

❇️ Ultraset is an excellent tool to improve your language model's skills in diverse knowledge areas.
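The usage recommendation above (only the "instruction", "input", and "output" columns) pairs naturally with the standard Alpaca prompt template. A minimal sketch (the template is the common Alpaca one, not something Ultraset-specific):

```python
# Format one Ultraset row with the standard Alpaca template, using only
# the "instruction", "input", and "output" columns.
def format_alpaca(example: dict) -> str:
    if example.get("input"):
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{example['instruction']}\n\n"
            f"### Input:\n{example['input']}\n\n"
            f"### Response:\n{example['output']}"
        )
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{example['instruction']}\n\n"
        f"### Response:\n{example['output']}"
    )

# With the Hugging Face datasets library, this could then be applied as:
#   ds = load_dataset("fluently-sets/ultraset", split="train")
#   ds = ds.map(lambda ex: {"text": format_alpaca(ex)})
```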
reacted to aaditya's post with 🔥 2 days ago
Last Week in Medical AI: Top Research Papers/Models 🔥
🏅 (December 15 – December 21, 2024)

Medical LLM & Other Models
- MedMax: Mixed-Modal Biomedical Assistant
- Advanced multimodal instruction tuning
- Enhanced biomedical knowledge integration
- Comprehensive assistant capabilities
- MGH Radiology Llama 70B
- Specialized radiology focus
- State-of-the-art performance
- Enhanced report generation capabilities
- HC-LLM: Historical Radiology Reports
- Context-aware report generation
- Historical data integration
- Improved accuracy in diagnostics

Frameworks & Methods
- ReflecTool: Reflection-Aware Clinical Agents
- Process-Supervised Clinical Notes
- Federated Learning with RAG
- Query Pipeline Optimization

Benchmarks & Evaluations
- Multi-OphthaLingua
- Multilingual ophthalmology benchmark
- Focus on LMICs healthcare
- Bias assessment framework
- ACE-M3 Evaluation Framework
- Multimodal medical model testing
- Comprehensive capability assessment
- Standardized evaluation metrics

LLM Applications
- Patient-Friendly Video Reports
- Medical Video QA Systems
- Gene Ontology Annotation
- Healthcare Recommendations

Special Focus: Medical Ethics & AI
- Clinical Trust Impact Study
- Mental Health AI Challenges
- Hospital Monitoring Ethics
- Radiology AI Integration

Now you can watch and listen to the latest Medical AI papers daily on our YouTube and Spotify channels as well!

- Full thread in detail:
https://x.com/OpenlifesciAI/status/1870504774162063760
- Youtube Link: youtu.be/SbFp4fnuxjo
- Spotify: https://t.co/QPmdrXuWP9
reacted to luigi12345's post with 👀 2 days ago
PERFECT FINAL PROMPT for Coding and Debugging.
Step 1: Generate the prompt that, if sent to you, will make you adjust the script so it meets each and every criterion it needs to meet to be 100% bug-free and perfect.

Step 2: Adjust the script following the steps and instructions in the prompt created in Step 1.

reacted to prithivMLmods's post with 🤗 2 days ago
reacted to wenhuach's post with 👍 2 days ago
posted an update 2 days ago
📢 So far I have noticed that 🧠 reasoning with LLMs 🤖 tends to be more accurate in English than in other languages.
However, besides GoogleTrans and other open, transparent translators, I could not find an easy-to-use solution free of the following issues:
1. 🔴 Third-party framework installation
2. 🔴 Text chunking
3. 🔴 No support for meta-annotations like spans / objects / etc.

💎 To cope with the problem of IR over non-English texts, I am happy to share bulk-translate 0.25.0. 🎊

⭐ https://github.com/nicolay-r/bulk-translate

bulk-translate is a tiny no-strings Python 🐍 framework for translating series of texts with pre-annotated fixed spans that remain invariant under translation.

It provides a 👨‍💻 Python 🐍 API for quick data translation with (optionally) annotated objects in texts (see figure below).
I made it as accessible as possible for RAG and / or LLM-powered downstream apps:
πŸ“˜ https://github.com/nicolay-r/bulk-translate/wiki

All you have to do is provide an iterator of texts, where each text is either:
1. ✅ A string object
2. ✅ A list of strings and nested lists that represent spans (value + any ID data).

🤖 By default I provide a wrapper over googletrans, which you can override with your own 🔥
https://github.com/nicolay-r/bulk-translate/blob/master/models/googletrans_310a.py
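The span-invariant idea described above can be sketched as follows (a conceptual illustration only, not bulk-translate's actual API; `translate_with_spans` and the stand-in translator are hypothetical names):

```python
# Conceptual sketch of span-invariant translation (illustrative only,
# not bulk-translate's actual API). Each input is a list of plain
# strings and [value, id] lists; spans are copied through verbatim,
# while free text goes to the pluggable translator.
def translate_with_spans(parts, translate):
    out = []
    for part in parts:
        if isinstance(part, list):   # pre-annotated span: invariant
            out.append(part[0])
        else:                        # free text: translated
            out.append(translate(part))
    return " ".join(out)

# A trivial stand-in translator; a googletrans wrapper would go here.
shout = lambda text: text.upper()
print(translate_with_spans(
    ["hello,", ["Bill Gates", 1], "works at", ["Microsoft", 2]], shout))
# -> HELLO, Bill Gates WORKS AT Microsoft
```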
posted an update 4 days ago
📢 If you're working in the relation extraction / character network domain, the following post may be relevant.
Excited to share the most recent milestone: the release of ARElight 0.25.0 🎊

Core library: https://github.com/nicolay-r/ARElight
Server: https://github.com/nicolay-r/ARElight-server

🔎 What is ARElight? It is a Granular Viewer of Sentiments Between Entities in Massively Large Documents and Collections of Texts.
In short, it extracts contexts with mentioned object pairs for the related prompting / classification.
In the slides below we illustrate the ARElight application for sentiment classification between object pairs in context.

We exploit DeepPavlov NER models + GoogleTranslate + a BERT-based classifier in the demo. The bash script for launching the quick demo illustrates the application of these components.

The new update provides a series of new features:
✅ SQLite support for storing all the extracted samples
✅ Enhanced GUI for content investigation
✅ Switch to external no-strings projects for NER and translation

Supplementary materials:
📜 Paper: https://link.springer.com/chapter/10.1007/978-3-031-56069-9_23
posted an update 11 days ago
📢 For those who wish to quickly start with reasoning / CoT applications over rows of tabular data with minimal dependencies, this post may be valuable.

🔎 The problem is that running a bulk of Chain-of-Thought (CoT) 🔗 queries against a remotely accessed LLM 🤖 (like OpenRouter / Replicate / OpenAI) may result in connection loss, which leads to exceptions 💥 and challenges with restoring the generated content.

This is where I contribute with bulk-chain.
⭐ https://github.com/nicolay-r/bulk-chain

Currently working on version 0.24.3, in which I am happy to announce an API for developing apps based on CoT schema declarations in JSON (details in the attached images 📸).

All you have to do is:
✅ 1. Declare the CoT schema in JSON
✅ 2. Declare the model or use a preset
✅ 3. Launch the code

One example is the ReplicateIO provider:
https://github.com/nicolay-r/bulk-chain/blob/master/ext/replicate.py

Each model wraps its inference call in a try-catch block.
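The schema-plus-guarded-inference idea can be sketched roughly like this (all field and function names here are hypothetical illustrations, not the actual bulk-chain format):

```python
import json
import time

# Hypothetical sketch of a JSON-declared CoT schema plus a guarded
# inference call (field and function names are illustrative, not the
# actual bulk-chain format).
schema = json.loads("""
{
  "name": "sentiment-cot",
  "steps": [
    {"prompt": "List the entities mentioned in: {text}", "out": "entities"},
    {"prompt": "Given entities {entities}, classify the sentiment of: {text}", "out": "label"}
  ]
}
""")

def guarded_call(infer, prompt, attempts=3, delay=1.0):
    """Retry remote inference so a dropped connection does not kill the run."""
    for i in range(attempts):
        try:
            return infer(prompt)
        except ConnectionError:
            if i == attempts - 1:
                raise
            time.sleep(delay)

def run_chain(schema, infer, row):
    ctx = dict(row)
    for step in schema["steps"]:  # each step's output feeds later prompts
        ctx[step["out"]] = guarded_call(infer, step["prompt"].format(**ctx))
    return ctx
```

Plugging in a real provider means replacing `infer` with, e.g., a Replicate client call.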
upvoted an article 11 days ago
Reverse Thinking Makes LLMs Stronger Reasoners

By mikelabs
posted an update 18 days ago
If you're approaching Information Retrieval with pre-processing techniques for LLMs, this post might be relevant.

Excited to share the release of version 0.25.1 of the AREkit library! 🎉🥳🎊🎁

AREkit is an NLP toolkit of components for deep understanding of textual narratives through the extraction of inner relations via various techniques, including machine learning. This toolkit is helpful if you wish to structure your dataset for an IR problem: it allows you to turn your narratives into structured datasets of relations mentioned in sentences (sampling).

In the GenAI era, AREkit contributes no-strings NLP pipelines and related elements for building your own NLP workflow with any third-party ML / LLM / API you wish.

🌟 https://github.com/nicolay-r/AREkit/releases/tag/v0.25.1-rc

In 0.25.1, the following steps were made in that direction:
1. ✅ Native batching support for pipelines
2. 📦 Split out third-party projects for several text-preprocessing elements:
bulk-translate with GoogleTranslate or any other you wish: https://github.com/nicolay-r/bulk-translate
bulk-ner for NER with DeepPavlov models or any other you wish: https://github.com/nicolay-r/bulk-ner
bulk-chain for reasoning with any LLM you wish: https://github.com/nicolay-r/bulk-chain
* (soon support for AREkit)
3. ❌ Removed conventional neural-network-related components

📺 One of the demos is ARElight, which represents a granular viewer / GUI for network-based representations of information extracted from narratives:
ARElight: https://github.com/nicolay-r/ARElight
posted an update 25 days ago
📢 If you aim to process spreadsheet data with the LLM Chain-of-Thought technique, then this update might be valuable for you 💎

The updated 📦 bulk-chain 0.24.2, aimed at iterative processing of CSV/JSONL data with no string dependencies on third-party LLM frameworks, is out 🎉

📦: https://pypi.org/project/bulk-chain/0.24.2/
🌟: https://github.com/nicolay-r/bulk-chain
πŸ“˜: https://github.com/nicolay-r/bulk-chain/issues/26

The key feature of bulk-chain is SQLite caching, which saves your time ⏰️ and money 💵 by guaranteeing no data loss when using remote LLM providers such as OpenAI, ReplicateIO, OpenRouter, etc.

🔧 This release has the following updates:
✅ Now using a separate tiny iterator package, source-iter
✅ You can manually set the number of attempts to continue in case of a lost connection
✅ Other minor improvements

Quick start on GoogleColab:
📙: https://colab.research.google.com/github/nicolay-r/bulk-chain/blob/master/bulk_chain_tutorial.ipynb

#reasoning #bulk #sqlite3 #chainofthought #cot #nlp #pipeline #nostrings #processing #data #dynamic #llm
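The SQLite caching idea, persisting each completed response so an interrupted run never re-pays for finished queries, can be sketched as follows (a minimal illustration, not bulk-chain's actual schema):

```python
import sqlite3

# Minimal sketch of the SQLite caching idea (illustrative, not
# bulk-chain's actual schema): completed responses are persisted per
# prompt, so re-running after a dropped connection only re-queries the
# rows that are still missing.
def cached_infer(db_path, prompt, infer):
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS cache (prompt TEXT PRIMARY KEY, response TEXT)")
    row = con.execute(
        "SELECT response FROM cache WHERE prompt = ?", (prompt,)).fetchone()
    if row is not None:
        con.close()
        return row[0]             # cache hit: no remote call, no extra cost
    response = infer(prompt)      # cache miss: call the remote provider
    con.execute("INSERT INTO cache VALUES (?, ?)", (prompt, response))
    con.commit()
    con.close()
    return response
```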
posted an update about 1 month ago
📢 If you have been looking for a quick translator application for batches of texts with fixed spans that must survive translation, then this post might be relevant! Delighted to share bulk_translate -- a framework for automatic text translation with pre-annotated fixed spans.

📦 https://pypi.org/project/bulk-translate/
🌟 https://github.com/nicolay-r/bulk-translate

🔑 Spans allow you to control objects in your texts so that they remain intact under translation. By default, an implementation for GoogleTranslate is provided.

bulk_translate features:
✅ Native implementation of two translation modes:
- fast mode: exploits extra characters for grouping text parts into a single batch
- accurate mode: performs individual translation of each text part
✅ No strings: you're free to adopt any LM / LLM backend.
Supports googletrans by default.

The initial release of the project supports fixed spans as text parts wrapped in square brackets [] with no inner space characters.

You can play with your data in CSV here on GoogleColab:
📒 https://colab.research.google.com/github/nicolay-r/bulk-translate/blob/master/bulk_translate_demo.ipynb

πŸ‘ This project is based on AREkit 0.25.1 pipelines for deployment lm-based workflows:
https://github.com/nicolay-r/AREkit
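The bracketed-span syntax described above can be parsed in a few lines of Python (an illustrative sketch, not bulk_translate's actual parser):

```python
import re

# Illustrative parser for the bracketed-span syntax described above
# (a sketch, not bulk_translate's actual implementation): parts wrapped
# in [] with no inner spaces are kept as spans, the rest is free text.
def split_spans(text):
    parts = []
    for token in re.split(r"(\[\S+?\])", text):
        if token.startswith("[") and token.endswith("]") and len(token) > 2:
            parts.append([token[1:-1]])   # span: invariant under translation
        elif token.strip():
            parts.append(token.strip())   # free text: goes to the translator
    return parts

print(split_spans("The [USA] signed a deal with [EU] today"))
# -> ['The', ['USA'], 'signed a deal with', ['EU'], 'today']
```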
reacted to merve's post with ❀️ about 1 month ago
your hugging face profile now has your recent activities πŸ€—