Fahadh
fahadh4ilyas
AI & ML interests
None yet
Recent Activity
- Updated fahadh4ilyas/llama3.2-11B-Vision-Instruct-INT4-GPTQ (about 1 month ago)
- Updated fahadh4ilyas/Llama-4-Scout-17B-16E-Instruct-Quantizable (about 1 month ago)
- Updated fahadh4ilyas/Llama-4-Scout-17B-16E-Instruct-FP8 (about 1 month ago)
Organizations
None yet
fahadh4ilyas's activity
- How to load this model? (4) · #2, opened about 1 month ago by lechga
- Why doesn't the example prompt include the prompt format? (3) · #8, opened 8 months ago by fahadh4ilyas
- What is the max context length of Mistral-7B-Instruct-v0.2? (17) · #43, opened over 1 year ago by hanshupe
- The fused expert parameters mean load_in_4bit doesn't work properly, nor does LoRA (31) · #10, opened about 1 year ago by tdrussell
- Ready for Testing... (9) · #1, opened about 1 year ago by Qubitium
- Failing to 4-bit quantize with BitsAndBytes (1) · #16, opened about 1 year ago by simsim314
- Target modules {'out_proj', 'Wqkv'} not found in the phi-2 model: how can I fix this error? (2) · #115, opened about 1 year ago by roy1109
- Why does this model keep generating \n when loaded with text-generation-webui? (4) · #2, opened over 1 year ago by fahadh4ilyas
- The `main` branch for TheBloke/Llama-2-70B-GPTQ appears borked (11) · #3, opened almost 2 years ago by Aivean
- Is max_position_embeddings really the parameter to change? (3) · #1, opened almost 2 years ago by fahadh4ilyas
- How do you quantize the model? (2) · #3, opened almost 2 years ago by fahadh4ilyas