---
license: other
base_model:
- meta-llama/Llama-3.1-8B
---
# KernelLLM
We introduce KernelLLM, a large language model based on Llama 3.1 that has been trained specifically for the task of writing kernels. This work was done in collaboration with Project Popcorn.
## Model Use
To use this model, first install `transformers` and `accelerate`:

```bash
pip install transformers accelerate
```
The code below demonstrates the model's default capabilities. You may need to set a Hugging Face access token; see the [access token documentation](https://huggingface.co/docs/hub/security-tokens) for details.
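For example, one way to authenticate the session is with `huggingface_hub` (a minimal sketch; it assumes your token is stored in the `HF_TOKEN` environment variable):

```python
import os

# huggingface_hub is installed as a dependency of transformers.
from huggingface_hub import login

# Authenticate this session with your Hugging Face access token.
login(token=os.environ["HF_TOKEN"])
```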
```python
from transformers import AutoTokenizer
import transformers
import torch

model = "facebook/KernelLLM"
tokenizer = AutoTokenizer.from_pretrained(model)

# Build a text-generation pipeline that shards the model across
# available devices and runs in half precision.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

prompt = "import torch"

response = pipeline(
    prompt,
    do_sample=True,
    top_k=2,
    temperature=0.1,
    top_p=0.95,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=100,
    truncation=True,
)[0]

# The returned "generated_text" already includes the prompt.
print(response["generated_text"])
```
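If you prefer not to use the pipeline helper, the same generation can be sketched with the lower-level `generate` API. This is an illustrative variant using the same checkpoint and sampling settings, not a second official example:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "facebook/KernelLLM"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Tokenize the prompt and move it to the model's device.
inputs = tokenizer("import torch", return_tensors="pt").to(model.device)

output_ids = model.generate(
    **inputs,
    do_sample=True,
    top_k=2,
    temperature=0.1,
    top_p=0.95,
    max_length=100,
    pad_token_id=tokenizer.eos_token_id,  # Llama has no pad token by default
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```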
## Model Details
**Model Developers** Meta.

**Input** Models input text only.

**Output** Models generate text only.

**Model Architecture** KernelLLM is an auto-regressive language model that uses an optimized transformer architecture.

**Model Dates** KernelLLM was trained in March 2025.

**Status** This is a static model trained on an offline dataset.

**License** See LICENSE.pdf for details.
## Intended Use
**Intended Use Cases** KernelLLM is intended for commercial and research use in English and in relevant programming languages, in particular Python and Triton.
**Out-of-Scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for KernelLLM and its variants.
## Hardware and Software
**Training Factors** We used custom training libraries.

**Carbon Footprint** In aggregate, training KernelLLM required 250 hours of computation on hardware of type A100-80GB (TDP of 350-400W), not including the training of the base model. 100% of the estimated tCO2eq emissions were offset by Meta’s sustainability program.
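As a rough scale check (counting GPU board power only, at the stated TDP), 250 hours × 0.35-0.40 kW corresponds to roughly 90-100 kWh of GPU energy for this fine-tuning run.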
## Ethical Considerations and Limitations
KernelLLM and its variants are a new technology that carries risks with use. Testing conducted to date has been in English and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, KernelLLM's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of KernelLLM, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the [Responsible Use Guide](https://ai.meta.com/llama/responsible-use-guide).