Ashish Tanwer
ashishtanwer
AI & ML interests
None yet
Recent Activity
- liked a dataset nohurry/Opus-4.6-Reasoning-3000x-filtered 8 days ago
- liked a model RekaAI/reka-edge-2603 8 days ago
- liked a model Crownelius/Crow-9B-HERETIC-4.6 8 days ago

Organizations

Collections
RAG
DataLabelling
LLM
- AnyCoder
  Space • Running • 3.17k likes • Generate code instantly from natural language prompts
- Qwen2.5 Coder Artifacts
  Space • Running • Featured • 272 likes • Generate and preview web app code from a text description
- QwQ-32B-Preview
  Space • Running • Featured • 922 likes
- Open LLM Leaderboard
  Space • Running on CPU Upgrade • 13.9k likes • Track, rank and evaluate open LLMs and chatbots
Evals
ClassicalML
Papers and resources for Classical ML
InfraML
Agents
Transformer
- sentence-transformers/all-mpnet-base-v2
  Sentence Similarity • 0.1B params • Updated • 28.1M downloads • 1.26k likes
- Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
  Paper • 1910.10683 • Published • 17 upvotes
- google-t5/t5-base
  Translation • Updated • 1.96M downloads • 770 likes
- Attention Is All You Need
  Paper • 1706.03762 • Published • 118 upvotes
DataCleaning
Dataset
- The RefinedWeb Dataset for Falcon LLM: Outperforming Curated Corpora with Web Data, and Web Data Only
  Paper • 2306.01116 • Published • 43 upvotes
- HuggingFaceFW/fineweb
  Viewer • Updated • 52.5B rows • 196k downloads • 2.71k likes
- tiiuae/falcon-refinedweb
  Viewer • Updated • 968M rows • 14.9k downloads • 897 likes
- LLaMA: Open and Efficient Foundation Language Models
  Paper • 2302.13971 • Published • 22 upvotes
Training
- Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
  Paper • 1910.10683 • Published • 17 upvotes
- AutoTrain: No-code training for state-of-the-art models
  Paper • 2410.15735 • Published • 59 upvotes
- LoRA Land: 310 Fine-tuned LLMs that Rival GPT-4, A Technical Report
  Paper • 2405.00732 • Published • 122 upvotes
- LoRA: Low-Rank Adaptation of Large Language Models
  Paper • 2106.09685 • Published • 60 upvotes
Diffusion
DataCrawling