Zacharias030 committed (verified)
Commit ff49b9c · Parent(s): 428cf76

Update README.md

Files changed (1): README.md (+15 -12)
README.md CHANGED
@@ -8,23 +8,25 @@ datasets:

# KernelLLM
![scatter performance comparison plot](media/llm_performance_comparison.png)
- On KernelBench-Triton Level 1, our 8B parameter model matches GPT-4o in single-shot performance. With multiple inferences, KernelLLM's performance matches DeepSeek R1. This is all from a model with two orders of magnitude fewer parameters than its competitors.
- ## Making Kernel Development more accessible with KernelLLM

- We introduce KernelLLM, a large language model based on Llama 3.1, which has been trained specifically for the task of authoring GPU kernels using Triton. KernelLLM translates PyTorch modules into Triton kernels and was evaluated on KernelBench-Triton (see [here](https://github.com/ScalingIntelligence/KernelBench/pull/35)).

- KernelLLM's vision is to meet the growing demand for high-performance GPU kernels by automating the generation of efficient Triton implementations. As workloads grow larger and more diverse accelerator architectures emerge, the need for tailored kernel solutions has increased significantly. Although a number of [works](https://metr.org/blog/2025-02-14-measuring-automated-kernel-engineering/) [exist](https://cognition.ai/blog/kevin-32b), most of them are limited to [test-time](https://sakana.ai/ai-cuda-engineer/) [optimization](https://developer.nvidia.com/blog/automating-gpu-kernel-generation-with-deepseek-r1-and-inference-time-scaling/), while some tune on solution traced of KernelBench problems itself. To the best of our knowledge KernelLLM is the first LLM finetuned on external (torch, triton) pairs, and we hope that making our model available can accelerate progress.
KernelLLM aims to democratize GPU programming by making kernel development more accessible and efficient.

![alt text](media/triton-kernel-workflow.png)

- *KernelLLM Workflow for Triton Kernel Generation Our approach uses KernelLLM to translate PyTorch code (green) into Triton kernel candidates. Input and output components are marked in bold. The generations are validated against unit tests, which run kernels with random inputs of known shapes. This workflow allows us to evaluate multiple generations (pass@k) by increasing the number of kernel candidate generations. The best kernel implementation is selected and returned (green output).*

- The model was trained on approximately 25,000 paired examples of PyTorch modules and their equivalent Triton kernel implementations and additional synthetically generated samples. Our approach combines filtered code from TheStack [Kocetkov et al. 2022] and synthetic examples generated through torch.compile() and additional prompting techniques. The filtered and compiled dataset can be found [on Huggingface](https://huggingface.co/datasets/GPUMODE/Inductor_Created_Data_Permissive).

- We finetuned Llama3.1-8B-Instruct on the created dataset using supervised instruction tuning and measured its ability to generate correct Triton kernels and calling code on KernelBench-Triton, our newly created variant of KernelBench [Ouyang et al. 2025] targeting Triton kernel generation. The torch code was used with a prompt template containing a format example as instruction during both training and evaluation. The model was trained for 10 epochs with a batch size of 32 and a standard SFT recipe with hyperparameters selected by perplexity on a held-out subset of the data. Training took circa 12 hours wall clock time on 16 GPUs (192 GPU hours), and we report the best validation results obtained.

### Model Performance

@@ -45,9 +47,10 @@ We finetuned Llama3.1-8B-Instruct on the created dataset using supervised instru
| Llama R1 Distill | 70 | 11 | reasoning |
| DeepSeek R1 | 671 | 30 | 1 |

- Our 8B parameter model achieves competitive or superior performance compared to much larger models on kernel generation tasks, demonstrating the effectiveness of our specialized training approach.

The resulting model is competitive with state-of-the-art LLMs despite its small size. We evaluate our model on KernelBench, an open-source benchmark that measures the ability of LLMs to write efficient GPU kernels. It contains 250 selected PyTorch modules organized into difficulty levels, from single torch operators such as Conv2D or Swish (level 1) to full model architectures (level 3). The benchmark measures both correctness (by comparing against reference PyTorch outputs) and performance (by measuring speedup over baseline implementations). We implemented a new KernelBench-Triton variant that evaluates an LLM's ability to generate Triton kernels, making it an ideal benchmark for evaluating KernelLLM's capabilities. All our measurements were done on Nvidia H100 GPUs.

![pass at k analysis plot](media/kernelllm_pass_at_k_scaling.png)
*KernelLLM shows quasi log-linear scaling behavior during pass@k analysis.*

@@ -134,9 +137,9 @@ raw_output = model.generate_raw("Your prompt here", temperature=1.0, max_new_tok

Despite showing promising results, KernelLLM has several limitations:

- - The model may still produce incorrect API references and syntax errors
- - Generated code structurally resembles compiler-generated output
- - Error analysis shows common issues related to tensor shapes, type handling, and numerical precision

## Model Details

@@ -164,7 +167,7 @@ Despite showing promising results, KernelLLM has several limitations:

**Training Factors:** We used custom training libraries.

- **Carbon Footprint:** In aggregate, training KernelLLM required 250 hours of computation on hardware of type A100-80GB (TDP of 350-400W), not including the training of the base model. 100% of the estimated tCO2eq emissions were offset by Meta's sustainability program.

## Ethical Considerations and Limitations

@@ -8,23 +8,25 @@ datasets:

# KernelLLM
![scatter performance comparison plot](media/llm_performance_comparison.png)
+ *On KernelBench-Triton Level 1, our 8B parameter model exceeds models such as GPT-4o and DeepSeek V3 in single-shot performance. With multiple inference attempts, KernelLLM outperforms DeepSeek R1. All of this from a model with two orders of magnitude fewer parameters than its competitors.*

+ ## Making Kernel Development more accessible with KernelLLM

+ We introduce KernelLLM, a large language model based on Llama 3.1 Instruct that has been trained specifically for the task of authoring GPU kernels using Triton. KernelLLM translates PyTorch modules into Triton kernels and was evaluated on KernelBench-Triton (see [here](https://github.com/ScalingIntelligence/KernelBench/pull/35)).
KernelLLM aims to democratize GPU programming by making kernel development more accessible and efficient.

+ KernelLLM's vision is to meet the growing demand for high-performance GPU kernels by automating the generation of efficient Triton implementations. As workloads grow larger and more diverse accelerator architectures emerge, the need for tailored kernel solutions has increased significantly. Although a number of [works](https://metr.org/blog/2025-02-14-measuring-automated-kernel-engineering/) [exist](https://cognition.ai/blog/kevin-32b), most of them are limited to [test-time](https://sakana.ai/ai-cuda-engineer/) [optimization](https://developer.nvidia.com/blog/automating-gpu-kernel-generation-with-deepseek-r1-and-inference-time-scaling/), while others tune on solution traces of the KernelBench problems themselves, which limits how informative the results are about out-of-distribution generalization. To the best of our knowledge, KernelLLM is the first LLM finetuned on external (torch, triton) pairs, and we hope that making our model available can accelerate progress towards intelligent kernel authoring systems.

![alt text](media/triton-kernel-workflow.png)

+ *KernelLLM Workflow for Triton Kernel Generation: Our approach uses KernelLLM to translate PyTorch code (green) into Triton kernel candidates. Input and output components are marked in bold. The generations are validated against unit tests, which run the kernels with random inputs of known shapes. This workflow lets us evaluate multiple generations (pass@k) by increasing the number of kernel candidates. The best kernel implementation is selected and returned (green output).*
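
To make the workflow concrete, the figure's generate-validate-select loop can be sketched as follows (a minimal illustration; `generate_triton` and `compile_and_load` are hypothetical helpers standing in for the model call and kernel loading, not part of the released code):

```python
import torch

def best_of_k(model, pytorch_src: str, test_inputs, ref_module, k: int = 8):
    """Generate k Triton kernel candidates for a PyTorch module and
    return the first candidate that passes the unit tests."""
    for _ in range(k):
        candidate_src = model.generate_triton(pytorch_src)  # hypothetical model call
        try:
            kernel = compile_and_load(candidate_src)  # hypothetical: exec + import
        except Exception:
            continue  # candidates that fail to compile count as failures
        # Unit tests: run the kernel on random inputs of known shapes and
        # compare against the reference PyTorch outputs.
        if all(
            torch.allclose(kernel(*args), ref_module(*args), atol=1e-3, rtol=1e-3)
            for args in test_inputs
        ):
            return candidate_src
    return None  # no candidate passed within k attempts
```

Raising k in this loop is exactly what the pass@k analysis below varies.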

+ The model was trained on approximately 25,000 paired examples of PyTorch modules and their equivalent Triton kernel implementations, plus additional synthetically generated samples. Our approach combines filtered code from TheStack [Kocetkov et al. 2022] with synthetic examples generated through `torch.compile()` and additional prompting techniques. The filtered and compiled dataset can be found [on Huggingface](https://huggingface.co/datasets/GPUMODE/Inductor_Created_Data_Permissive).
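
As an illustration of the `torch.compile()` route (a sketch of the general idea only, not the exact data pipeline used): compiling a module with the default Inductor backend makes PyTorch emit Triton kernels, which can be captured, for example via `TORCH_LOGS="output_code"`, and paired with the original PyTorch source.

```python
# Run as: TORCH_LOGS="output_code" python harvest_example.py
# Inductor logs the Triton kernels it generates for this module; the
# (PyTorch source, Triton code) pair becomes one training example.
import torch
import torch.nn as nn

class GeluMul(nn.Module):
    def forward(self, x, y):
        return nn.functional.gelu(x) * y

compiled = torch.compile(GeluMul())  # Inductor is the default backend
_ = compiled(torch.randn(1024, device="cuda"), torch.randn(1024, device="cuda"))
```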

+ We finetuned Llama3.1-8B-Instruct on the created dataset using supervised instruction tuning and measured its ability to generate correct Triton kernels and corresponding calling code on KernelBench-Triton, our newly created variant of KernelBench [Ouyang et al. 2025] that targets Triton kernel generation. During both training and evaluation, the torch code was embedded in a prompt template containing a format example as the instruction. The model was trained for 10 epochs with a batch size of 32, using a standard SFT recipe with hyperparameters selected by perplexity on a held-out subset of the training data. Training took circa 12 hours of wall-clock time on 16 GPUs (192 GPU hours), and we report the best checkpoint's validation results.
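
The exact prompt template is not reproduced in this section; a plausible shape of such a template, with the wording and placeholders invented here purely for illustration, is:

```python
# Illustrative only: the real template and its format example ship with
# the model; the wording below is a hypothetical reconstruction.
PROMPT_TEMPLATE = """Rewrite the following PyTorch module as an optimized
Triton kernel together with the Python code that calls it.

Example of the expected output format:
{format_example}

PyTorch module to translate:
{torch_source}
"""

prompt = PROMPT_TEMPLATE.format(format_example="...", torch_source="...")  # elided
```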

### Model Performance

@@ -45,9 +47,10 @@
| Llama R1 Distill | 70 | 11 | reasoning |
| DeepSeek R1 | 671 | 30 | 1 |

+ *Our 8B parameter model achieves competitive or superior performance compared to much larger models on kernel generation tasks, demonstrating the effectiveness of our specialized training approach on KernelBench Level 1 against various baselines. KernelLLM inference was run with temperature=1.0 and top_p=0.97.*
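
In terms of the usage interface shown later in this card, those settings correspond to sampling along the lines of the sketch below (whether `generate_raw` forwards `top_p` directly is an assumption; check the usage section):

```python
# Sketch: draw k candidates per problem with the reported sampling settings.
# `model` and `prompt` come from the usage section of this card.
k = 8  # candidates per problem for pass@k
candidates = [
    model.generate_raw(prompt, temperature=1.0, top_p=0.97, max_new_tokens=2048)
    for _ in range(k)
]
```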

The resulting model is competitive with state-of-the-art LLMs despite its small size. We evaluate our model on KernelBench, an open-source benchmark that measures the ability of LLMs to write efficient GPU kernels. It contains 250 selected PyTorch modules organized into difficulty levels, from single torch operators such as Conv2D or Swish (level 1) to full model architectures (level 3). The benchmark measures both correctness (by comparing against reference PyTorch outputs) and performance (by measuring speedup over baseline implementations). We implemented a new KernelBench-Triton variant that evaluates an LLM's ability to generate Triton kernels, making it an ideal benchmark for evaluating KernelLLM's capabilities. All our measurements were done on Nvidia H100 GPUs.
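
KernelBench's own harness is not reproduced here, but a speedup measurement in the same spirit can be sketched with Triton's built-in benchmarking utility (`my_triton_gelu` stands in for a hypothetical generated kernel):

```python
import torch
import triton.testing

def speedup(baseline_fn, kernel_fn, *args) -> float:
    """Ratio of baseline runtime to kernel runtime (>1 means faster)."""
    base_ms = triton.testing.do_bench(lambda: baseline_fn(*args))
    kern_ms = triton.testing.do_bench(lambda: kernel_fn(*args))
    return base_ms / kern_ms

x = torch.randn(4096, 4096, device="cuda")
print(speedup(torch.nn.functional.gelu, my_triton_gelu, x))  # hypothetical kernel
```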
+
![pass at k analysis plot](media/kernelllm_pass_at_k_scaling.png)
*KernelLLM shows quasi log-linear scaling behavior during pass@k analysis.*
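
For reference, pass@k curves like this are typically computed with the unbiased estimator of Chen et al. (2021) from n sampled candidates per problem, of which c pass the tests; a minimal implementation:

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0
    return 1.0 - math.prod((n - c - i) / (n - i) for i in range(k))

# e.g. 10 samples per problem, 3 of which pass:
print(pass_at_k(n=10, c=3, k=1))  # 0.30
print(pass_at_k(n=10, c=3, k=5))  # ~0.92, higher with more attempts
```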

@@ -134,9 +137,9 @@
Despite showing promising results, KernelLLM has several limitations:

+ - The model may still produce incorrect API references and syntax errors, and its instruction-following ability is limited.
+ - Generated code structurally resembles compiler-generated output, and the model often fails to implement a meaningful kernel.
+ - Error analysis shows common issues related to instruction following with respect to variable naming, tensor shapes, type handling, and numerical precision.

## Model Details

@@ -164,7 +167,7 @@
**Training Factors:** We used custom training libraries.

+ **Carbon Footprint:** In aggregate, training KernelLLM required 250 hours of computation on hardware of type H100-80GB, not including the training of the base model. 100% of the estimated tCO2eq emissions were offset by Meta's sustainability program.

## Ethical Considerations and Limitations