Csaba Kecskemeti PRO

csabakecskemeti

AI & ML interests

None yet

Recent Activity

updated a model about 11 hours ago
DevQuasar/Writer.palmyra-med-20b-GGUF
published a model about 11 hours ago
DevQuasar/Writer.palmyra-med-20b-GGUF
updated a model about 12 hours ago
DevQuasar/Writer.palmyra-20b-chat-GGUF

Organizations

Zillow · DevQuasar · Hugging Face Party @ PyTorch Conference · Intelligent Estate · open/ acc · Hugging Face MCP Course

csabakecskemeti's activity

posted an update 2 days ago
posted an update about 2 months ago
posted an update about 2 months ago
posted an update 2 months ago
I'm collecting llama-bench results for inference with llama 3.1 8B Q4 and Q8 reference models on various GPUs. The results are the average of 5 executions.
The systems vary (different motherboards and CPUs, but that probably has little effect on inference performance).

https://devquasar.com/gpu-gguf-inference-comparison/
The exact models used are listed on the page.

I'd welcome results from other GPUs if you have access to anything else not yet in the post. Hopefully this is useful information for everyone.
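The mean-and-spread figures llama-bench reports (e.g. 143.18 ± 0.18 t/s) come from averaging the repeated runs; a minimal sketch of that calculation, using five made-up tokens/s samples:

```python
import statistics

# Five hypothetical tokens/s samples from repeated llama-bench runs (made-up numbers)
runs = [142.9, 143.1, 143.3, 143.2, 143.4]

mean = statistics.mean(runs)    # the reported t/s figure
stdev = statistics.stdev(runs)  # the "±" spread

print(f"{mean:.2f} ± {stdev:.2f}")  # → 143.18 ± 0.19
```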
posted an update 2 months ago
Managed to get my hands on a 5090FE, it's beefy

| model         | size     | params | backend | ngl | test  | t/s               |
| ------------- | -------- | ------ | ------- | --- | ----- | ----------------- |
| llama 8B Q8_0 | 7.95 GiB | 8.03 B | CUDA    | 99  | pp512 | 12207.44 ± 481.67 |
| llama 8B Q8_0 | 7.95 GiB | 8.03 B | CUDA    | 99  | tg128 | 143.18 ± 0.18     |

Comparison with other GPUs:
http://devquasar.com/gpu-gguf-inference-comparison/
replied to their post 2 months ago

Follow-up

With the smaller context length dataset the training succeeded.

posted an update 2 months ago
reacted to clem's post with 🚀 3 months ago
We just crossed 1,500,000 public models on Hugging Face (and 500k spaces, 330k datasets, 50k papers). One new repository is created every 15 seconds. Congratulations all!
posted an update 3 months ago
replied to their post 3 months ago

No success so far; the training data contains some larger contexts and it fails just before completing the first epoch.
(dataset: DevQuasar/brainstorm-v3.1_vicnua_1k)

Does anyone have further suggestions for the bnb config (with ROCm on an MI100)?
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization with double quantization and bf16 compute
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

Now testing with my other, smaller dataset, which seems to need less memory:
DevQuasar/brainstorm_vicuna_1k

replied to their post 3 months ago

It had failed by the morning; I need to find more room to decrease the memory usage.

replied to their post 3 months ago

The machine itself is also funny. This is my GPU test bench.
Now also testing the PWM fan control and jetkvm.

IMG_7216.jpg

posted an update 3 months ago
view post
Post
833
Fine-tuning on the edge: pushing the MI100 to its limits.
QwQ-32B 4-bit QLoRA fine-tuning.
VRAM usage 31.498G/31.984G :D
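A back-of-the-envelope check on why a ~32B-parameter model squeezes into the MI100's 32 GB with 4-bit QLoRA; a rough sketch where the parameter count and overhead breakdown are assumptions, not measured values:

```python
# Rough, assumption-based VRAM estimate for 4-bit QLoRA on a ~32B-parameter model
params = 32.5e9              # assumed parameter count for a "32B" model
bytes_per_param_4bit = 0.5   # NF4 packs weights into roughly 4 bits each

weights_gb = params * bytes_per_param_4bit / 1e9
print(f"4-bit base weights alone: ~{weights_gb:.1f} GB")  # → ~16.2 GB

# The remaining headroom on a 32 GB card goes to LoRA adapters, optimizer state,
# activations, and quantization overhead, which is why usage sits just under the cap.
```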

  • 4 replies
replied to their post 3 months ago
replied to their post 3 months ago

Updated the post with GGUF (Q4, Q8) performance metrics.

replied to their post 3 months ago

Good callout, will add this evening.
Llama 3 8B Q8 was around 80 t/s generation.

posted an update 3 months ago
-UPDATED-
4-bit inference is working! The blogpost is updated with a code snippet and requirements.txt:
https://devquasar.com/uncategorized/all-about-amd-and-rocm/
-UPDATED-
I've played around with an MI100 and ROCm and collected my experience in a blogpost:
https://devquasar.com/uncategorized/all-about-amd-and-rocm/
Unfortunately I could not make inference or training work with the model loaded in 8-bit or with BnB, but I did everything else and documented my findings.
  • 4 replies
replied to their post 3 months ago
replied to their post 3 months ago

So far I've managed to get a working bnb install:

(bnbtest) kecso@gpu-testbench2:~/bitsandbytes/examples$ python -m bitsandbytes
g++ (Ubuntu 14.2.0-4ubuntu2) 14.2.0
Copyright (C) 2024 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++ BUG REPORT INFORMATION ++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++ OTHER +++++++++++++++++++++++++++
ROCm specs: rocm_version_string='63', rocm_version_tuple=(6, 3)
PyTorch settings found: ROCM_VERSION=63
The directory listed in your path is found to be non-existent: local/gpu-testbench2
The directory listed in your path is found to be non-existent: @/tmp/.ICE-unix/2803,unix/gpu-testbench2
The directory listed in your path is found to be non-existent: /etc/xdg/xdg-ubuntu
The directory listed in your path is found to be non-existent: /org/gnome/Terminal/screen/6bd83ab2_fd9f_4990_876a_527ef8117ef6
The directory listed in your path is found to be non-existent: //debuginfod.ubuntu.com
WARNING! ROCm runtime files not found in any environmental path.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++ DEBUG INFO END ++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Checking that the library is importable and ROCm is callable...
SUCCESS!
Installation was successful!

It's able to load the model into VRAM, but inference fails:
Exception: cublasLt ran into an error!

This is the main problem with anything that's not NVIDIA. The software is painful!
Keep trying...

reacted to stefan-it's post with 👍 3 months ago
She arrived 😍

[Expect more models soon...]
  • 2 replies