---
license: other
base_model:
- meta-llama/Llama-3.1-8B
---

# KernelLLM

We introduce KernelLLM, a large language model based on Llama 3.1 that has been trained specifically for the task of writing GPU kernels.
This work is in collaboration with [Project Popcorn](https://gpu-mode.github.io/popcorn/).

## Model Use

To use this model, please make sure to install `transformers` and `accelerate`:

```bash
pip install transformers accelerate
```

The code below demonstrates the model's default text-generation capabilities. You may need to set a Hugging Face access token; see the [access tokens documentation](https://huggingface.co/docs/hub/security-tokens).
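
If the checkpoint requires authentication, one way to provide a token programmatically is through `huggingface_hub` (a minimal sketch; the token string below is a placeholder, and running `huggingface-cli login` once works equally well):

```python
from huggingface_hub import login

# Paste your own access token from https://huggingface.co/settings/tokens
login(token="hf_xxx")
```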

```python
from transformers import AutoTokenizer
import transformers
import torch

model = "facebook/KernelLLM"

tokenizer = AutoTokenizer.from_pretrained(model)

# Build a text-generation pipeline; device_map="auto" places the weights on the
# available GPU(s), and float16 halves the memory footprint.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

prompt = "import torch"

response = pipeline(
    prompt,
    do_sample=True,
    top_k=2,
    temperature=0.1,
    top_p=0.95,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=100,
    truncation=True,
)[0]
# The generated text already includes the original prompt.
print(response["generated_text"])
```
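
For kernel-writing tasks, a more targeted prompt generally works better than the bare `import torch` string above. The snippet below is a sketch that reuses the `pipeline` and `tokenizer` objects from the previous example; the PyTorch reference module and the instruction wording are illustrative assumptions, not a documented prompt format:

```python
# Hypothetical prompt: ask the model to translate a small PyTorch module into Triton.
kernel_prompt = '''Rewrite the following PyTorch module so that its forward pass is
implemented with a custom Triton kernel.

import torch
import torch.nn as nn

class Model(nn.Module):
    def forward(self, x, y):
        return x + y
'''

kernel_response = pipeline(
    kernel_prompt,
    do_sample=True,
    temperature=0.1,
    top_p=0.95,
    eos_token_id=tokenizer.eos_token_id,
    max_new_tokens=512,  # leave room for a complete kernel in the continuation
    truncation=True,
)[0]
print(kernel_response["generated_text"])
```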

## Model Details

**Model Developers** Meta.

**Input** Models input text only.

**Output** Models generate text only.

**Model Architecture** KernelLLM is an auto-regressive language model that uses an optimized transformer architecture.

**Model Dates** KernelLLM was trained in March 2025.

**Status** This is a static model trained on an offline dataset. 

**License** See LICENSE.pdf for details.

## Intended Use

**Intended Use Cases** KernelLLM is intended for commercial and research use in English and relevant programming languages, in particular Python and Triton.

**Out-of-Scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the [Acceptable Use Policy](https://llama.meta.com/llama3/use-policy) and Licensing Agreement for KernelLLM and its variants.

## Hardware and Software

**Training Factors** We used custom training libraries.

**Carbon Footprint** In aggregate, training KernelLLM required 250 hours of computation on hardware of type A100-80GB (TDP of 350-400W), not including the training of the base model. 100% of the estimated tCO2eq emissions were offset by Meta’s sustainability program.

## Ethical Considerations and Limitations

KernelLLM and its variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, KernelLLM's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of KernelLLM, developers should perform safety testing and tuning tailored to their specific applications of the model.

Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide](https://ai.meta.com/llama/responsible-use-guide).