Quan Nguyen (qnguyen3) PRO
AI & ML interests: None yet
Recent Activity
- liked a model about 9 hours ago: Tongyi-Zhiwen/QwenLong-CPRS-7B
- liked a model 5 days ago: showlab/OmniConsistency
- updated a model 6 days ago: mlx-community/colnomic-embed-multimodal-3b-8bit
qnguyen3's activity
- how to use onnx · #17 opened 8 days ago by qnguyen3
- VSCODE + Cline + Ollama + Qwen2.5-Coder-32B-Instruct.Q8_0 · 3 · #20 opened 6 months ago by BigDeeper
- Adding Evaluation Results · #2 opened 8 months ago by leaderboard-pr-bot
- Open LLM Leaderboard results · 1 · #3 opened 8 months ago by SaisExperiments
- thank you for making quants · 1 · #1 opened 8 months ago by qnguyen3
- Evaluate output results · 1 · #3 opened 8 months ago by Quy1004
- Why dataset tag? · 7 · #1 opened 9 months ago by rombodawg
- Transformers doesn't support it yet? · ➕ 2 · 6 · #2 opened 11 months ago by mahiatlinux
- Missing configuration_llava_qwen2.py and configuration_llava_qwen2.py? · 1 · #1 opened 11 months ago by nicolollo
- Handling `flash_attn` Dependency for Non-GPU Environments · ❤️ 👍 10 · 20 · #4 opened 11 months ago by giacomopedemonte
- This model is amazing! · 👍 1 · 3 · #1 opened 11 months ago by nicolollo
- Leaderboard · 1 · #6 opened 11 months ago by Stark2008
- Multi-round conversation w/ PKV cache example code · 4 · #5 opened about 1 year ago by Xenova
- vilm/VinaLlama2-14B-arxiv vs vilm/VinaLlama2-14B · 1 · #1 opened about 1 year ago by anhnh2002
- Approach to reduce hallucination · 8 · #1 opened about 1 year ago by LoneRanger44
- Having trouble with finetuning · 2 · #2 opened over 1 year ago by 104-wonohfor
- Run on Macbook without flash_attn? · 2 · #1 opened about 1 year ago by palebluewanders
- Safetensor version · 2 · #3 opened about 1 year ago by anhnh2002