What sort of performance numbers are you seeing with llama.cpp and ik_llama?

#2 opened by segmond

What sort of performance numbers are you seeing with llama.cpp and ik_llama? Assuming from the specs that you have 32 GB VRAM and 512/700+ GB system RAM.

With ik_llama.cpp, for programming tasks with 32K context, I'm seeing roughly 115 tokens/second for prompt processing and 7-8 tokens/second for token generation. I'm not using llama.cpp much, but from memory it's roughly 20% slower.

Keep in mind that Kimi-K2 is still pretty new, so ik_llama may further improve performance with custom MLA code to better support this model.

With ik_llama.cpp, 32K f16 context, I managed to offload one layer to 32 GB VRAM... which improved the performance by exactly nothing :).
Anyway, here are benchmark results for Epyc 9355 + RTX 5090:

| PP | TG | N_KV | T_PP s | S_PP t/s | T_TG s | S_TG t/s |
|---:|---:|-----:|-------:|---------:|-------:|---------:|
| 2048 | 512 | 0 | 11.856 | 172.73 | 29.564 | 17.32 |
| 2048 | 512 | 2048 | 11.978 | 170.98 | 29.643 | 17.27 |
| 2048 | 512 | 4096 | 12.277 | 166.81 | 30.050 | 17.04 |
| 2048 | 512 | 6144 | 12.052 | 169.92 | 29.936 | 17.10 |
| 2048 | 512 | 8192 | 13.077 | 156.62 | 31.023 | 16.50 |
| 2048 | 512 | 10240 | 12.377 | 165.47 | 30.881 | 16.58 |
| 2048 | 512 | 12288 | 13.642 | 150.12 | 31.459 | 16.28 |
| 2048 | 512 | 14336 | 12.842 | 159.47 | 31.181 | 16.42 |
| 2048 | 512 | 16384 | 14.320 | 143.01 | 31.280 | 16.37 |
| 2048 | 512 | 18432 | 13.583 | 150.77 | 31.136 | 16.44 |
| 2048 | 512 | 20480 | 13.586 | 150.75 | 31.514 | 16.25 |
| 2048 | 512 | 22528 | 13.917 | 147.16 | 31.592 | 16.21 |
| 2048 | 512 | 24576 | 14.110 | 145.15 | 31.687 | 16.16 |
| 2048 | 512 | 26624 | 14.673 | 139.57 | 31.622 | 16.19 |
| 2048 | 512 | 28672 | 16.288 | 125.74 | 32.000 | 16.00 |
| 2048 | 512 | 30720 | 14.446 | 141.77 | 32.171 | 15.91 |
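For anyone reading the table: the throughput columns are derived from the timing columns (S_PP = PP / T_PP, S_TG = TG / T_TG), and comparing the first and last rows shows how much token generation slows down as the KV cache fills. A quick sketch using the two extreme rows (small rounding differences vs. the table are expected, since the table was computed from more precise timings):

```python
# Recompute the derived throughput columns from the first and last
# benchmark rows above, plus the TG slowdown going from empty to ~30K KV.
rows = [
    # (PP, TG, N_KV, T_PP, T_TG)
    (2048, 512, 0,     11.856, 29.564),
    (2048, 512, 30720, 14.446, 32.171),
]
for pp, tg, n_kv, t_pp, t_tg in rows:
    print(f"N_KV={n_kv}: S_PP={pp / t_pp:.2f} t/s, S_TG={tg / t_tg:.2f} t/s")

# TG slowdown at ~30K context relative to empty context
slowdown = 1 - (512 / 32.171) / (512 / 29.564)
print(f"TG slowdown at 30K context: {slowdown:.1%}")  # ~8.1%
```

So even at 30K tokens of context, generation only drops about 8%, which is a fairly flat curve for a model this size.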

Thank you @anikifoss .

@sousekd these are amazing numbers, thanks for sharing! How many memory channels do you have in your Epyc 9355 system?

@anikifoss It's 12 channels of DDR5-6400. Depending on the server's mood, the OCCT memory benchmark reports around 600 GB/s read and 420 GB/s write bandwidth. The CPU itself isn’t even that expensive. Unfortunately, the same can't be said for the RAM modules :).
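Those OCCT numbers line up well with theory: each DDR5 channel has a 64-bit data bus, so peak bandwidth is channels × transfer rate × 8 bytes. A quick check of the 12-channel DDR5-6400 figure:

```python
# Theoretical peak memory bandwidth for 12 channels of DDR5-6400.
channels = 12
transfers_per_s = 6400e6  # DDR5-6400: 6.4 GT/s per channel
bus_bytes = 8             # 64-bit data bus per channel

peak = channels * transfers_per_s * bus_bytes / 1e9
print(f"theoretical peak: {peak:.1f} GB/s")           # 614.4 GB/s
print(f"measured read efficiency: {600 / peak:.0%}")  # ~98%
```

A measured 600 GB/s read is about 98% of theoretical peak, which is excellent for a real-world benchmark.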

My hope is that, eventually, inference engines will become more NUMA-aware, so that adding a second CPU with its own set of RAM would allow me to run Q8 quantizations of these large models. But we might get entirely new architectures with shared memory between the CPU and GPU before that happens...

That's pretty high RAM bandwidth. I only get about 280 GB/s due to Threadripper's limited CCD count (only 4 CCDs on my CPU). With dual socket and higher clocks, it should be possible to hit 30 t/s with Kimi-K2. That would be pretty amazing!
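The 30 t/s figure can be sanity-checked with a rough bandwidth-bound estimate: every generated token has to stream the active weights from RAM once, so bandwidth ÷ active-weight bytes gives a ceiling on token generation. Kimi-K2 activates roughly 32B of its ~1T MoE parameters per token; the ~4.5 bits/weight figure below is an assumption standing in for whatever quant mix is actually used:

```python
# Bandwidth-bound ceiling on token generation (rough sketch).
# Assumptions: ~32B active params per token (Kimi-K2 MoE),
# ~4.5 bits/weight effective quantization (placeholder for the real quant mix).
active_params = 32e9
bits_per_weight = 4.5
bytes_per_token = active_params * bits_per_weight / 8  # ~18 GB per token

for name, bw_gbs in [("single socket", 600), ("dual socket (ideal)", 1200)]:
    ceiling = bw_gbs * 1e9 / bytes_per_token
    print(f"{name}: <= {ceiling:.0f} t/s")
```

That puts the single-socket ceiling around 33 t/s; the measured ~17 t/s is roughly half of that, so ~30 t/s on a dual-socket system with doubled bandwidth is plausible at similar efficiency, NUMA issues permitting.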

> only 4 CCDs on my CPU

Yeah, I still don't understand the impact of CCDs on memory performance. I've seen people on various forums recommending as many CCDs as possible, but at the same time, I’ve read that cross-CCD bandwidth is limited. Based on that, I’d assume fewer CCDs might result in less cross-CCD memory access...
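The usual explanation for the CCD effect is that each CCD talks to the memory controllers on the IO die over its own GMI link, so total achievable read bandwidth is roughly CCDs × per-link bandwidth, even when the DRAM channels themselves could deliver more. The per-link figure below is an assumption (roughly 70 GB/s read per Zen 4 GMI link; the exact number depends on FCLK and link width), but it matches the ~280 GB/s observation above:

```python
# Why few CCDs cap bandwidth: reads funnel through one GMI link per CCD.
# Per-link read bandwidth is an assumed round number (~70 GB/s on Zen 4).
per_link_read_gbs = 70

for ccds in (4, 8, 12):
    print(f"{ccds} CCDs: ~{ccds * per_link_read_gbs} GB/s read ceiling")
```

With 4 CCDs that works out to ~280 GB/s, which is why a 12-CCD EPYC can use all 12 channels while a 4-CCD part cannot, regardless of how fast the RAM is.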

Anyway, here is a useful chart showing CCD layouts for anyone considering EPYC, and here is a detailed explanation of its memory architecture.
