use_kernel_forward_from_hub ( layer_name: str ) → Callable
Decorator factory that makes a layer extensible under the given layer name.
The returned decorator prepares a layer class to use kernels from the Hugging Face Hub.
Example:
import torch
import torch.nn as nn
from kernels import use_kernel_forward_from_hub, kernelize
@use_kernel_forward_from_hub("MyCustomLayer")
class MyCustomLayer(nn.Module):
    def __init__(self, hidden_size):
        super().__init__()
        self.hidden_size = hidden_size

    def forward(self, x: torch.Tensor):
        # original implementation
        return x
model = MyCustomLayer(768)
# The layer can now be kernelized:
# model = kernelize(model, device="cuda")

replace_kernel_forward_from_hub
Function that prepares a layer class to use kernels from the Hugging Face Hub.
It is recommended to use the use_kernel_forward_from_hub() decorator instead.
This function should only be used as a last resort to extend third-party layers;
it is inherently fragile, since the member variables and forward signature
of such a layer can change.
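A minimal sketch of this last-resort path, assuming the function is called with the layer class and the layer name to register (mirroring the decorator above); ThirdPartySiluAndMul stands in for a third-party layer whose source cannot be decorated:
import torch
import torch.nn as nn
from torch.nn import functional as F
from kernels import replace_kernel_forward_from_hub, register_kernel_mapping, kernelize, LayerRepository

# Stand-in for a third-party layer that we cannot decorate ourselves
class ThirdPartySiluAndMul(nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        d = x.shape[-1] // 2
        return F.silu(x[..., :d]) * x[..., d:]

# Assumed call signature: make the existing class extensible under the name "SiluAndMul"
replace_kernel_forward_from_hub(ThirdPartySiluAndMul, "SiluAndMul")

register_kernel_mapping(
    {
        "SiluAndMul": {
            "cuda": LayerRepository(
                repo_id="kernels-community/activation",
                layer_name="SiluAndMul",
            )
        }
    }
)

model = nn.Sequential(nn.Linear(64, 128, device="cuda"), ThirdPartySiluAndMul())
model = kernelize(model, device="cuda")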
use_kernel_mapping ( mapping: Dict[str, Dict[Union[Device, str], Union[LayerRepositoryProtocol, Dict[Mode, LayerRepositoryProtocol]]]] inherit_mapping: bool = True )
Parameters
mapping (Dict[str, Dict[Union[Device, str], Union[LayerRepositoryProtocol, Dict[Mode, LayerRepositoryProtocol]]]]) —
The kernel mapping to apply. Maps layer names to device-specific kernel configurations.
inherit_mapping (bool, optional, defaults to True) —
When True, the current mapping will be extended by mapping inside the context. When False,
only mapping is used inside the context (see the sketch after the example below).
Context manager that sets a kernel mapping for the duration of the context.
This function allows temporary kernel mappings to be applied within a specific context, enabling different kernel configurations for different parts of your code.
Example:
import torch
import torch.nn as nn
from torch.nn import functional as F
from kernels import use_kernel_forward_from_hub
from kernels import use_kernel_mapping, LayerRepository, Device
from kernels import kernelize
# Define a mapping
mapping = {
"SiluAndMul": {
"cuda": LayerRepository(
repo_id="kernels-community/activation",
layer_name="SiluAndMul",
)
}
}
@use_kernel_forward_from_hub("SiluAndMul")
class SiluAndMul(nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        d = x.shape[-1] // 2
        return F.silu(x[..., :d]) * x[..., d:]
model = SiluAndMul()
# Use the mapping for the duration of the context.
with use_kernel_mapping(mapping):
    # kernelize uses the temporary mapping
    model = kernelize(model, device="cuda")
# Outside the context, original mappings are restored
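The inherit_mapping argument controls whether the temporary mapping extends or replaces the mappings that are already in effect. A short sketch, continuing the example above:
# Default: layer the temporary mapping on top of the registered mappings
with use_kernel_mapping(mapping):
    model = kernelize(model, device="cuda")

# Use only the temporary mapping inside the context
with use_kernel_mapping(mapping, inherit_mapping=False):
    model = kernelize(model, device="cuda")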
register_kernel_mapping ( mapping: Dict[str, Dict[Union[Device, str], Union[LayerRepositoryProtocol, Dict[Mode, LayerRepositoryProtocol]]]] inherit_mapping: bool = True )
Parameters
mapping (Dict[str, Dict[Union[Device, str], Union[LayerRepositoryProtocol, Dict[Mode, LayerRepositoryProtocol]]]]) —
The kernel mapping to register globally. Maps layer names to device-specific kernels.
The mapping can specify different kernels for different modes (training, inference, etc.).
inherit_mapping (bool, optional, defaults to True) —
When True, the current mapping will be extended by mapping. When False, the existing mappings
are erased before adding mapping.
Register a global mapping between layer names and their corresponding kernel implementations.
This function allows you to register a mapping between a layer name and the corresponding kernel(s) to use, depending on the device and mode. This should be used in conjunction with kernelize().
Example:
from kernels import LayerRepository, register_kernel_mapping, Mode
# Simple mapping for a single kernel per device
kernel_layer_mapping = {
"LlamaRMSNorm": {
"cuda": LayerRepository(
repo_id="kernels-community/activation",
layer_name="RmsNorm",
revision="layers",
),
},
}
register_kernel_mapping(kernel_layer_mapping)
# Advanced mapping with mode-specific kernels
advanced_mapping = {
"MultiHeadAttention": {
"cuda": {
Mode.TRAINING: LayerRepository(
repo_id="username/training-kernels",
layer_name="TrainingAttention"
),
Mode.INFERENCE: LayerRepository(
repo_id="username/inference-kernels",
layer_name="FastAttention"
),
}
}
}
register_kernel_mapping(advanced_mapping)

kernelize ( model: 'nn.Module' mode: Mode = Mode.TRAINING | Mode.TORCH_COMPILE device: Optional[Union[str, 'torch.device']] = None use_fallback: bool = True ) → nn.Module
Parameters
model (nn.Module) —
The PyTorch model to kernelize.
mode (Mode, defaults to Mode.TRAINING | Mode.TORCH_COMPILE) —
The mode that the kernel is going to be used in. For example, Mode.TRAINING | Mode.TORCH_COMPILE
kernelizes the model for training with torch.compile.
device (Union[str, torch.device], optional) —
The device type to load kernels for. The device type will be inferred from the model parameters
when not provided.
use_fallback (bool, optional, defaults to True) —
Whether to use the original forward method of modules when no compatible kernel could be found.
If set to False, an exception will be raised in such cases (see the sketch after the example below).
Returns
nn.Module
The kernelized model with optimized kernel implementations.
Replace layer forward methods with optimized kernel implementations.
This function iterates over all modules in the model and replaces the forward method of extensible layers
for which kernels are registered using register_kernel_mapping() or use_kernel_mapping().
Example:
import torch
import torch.nn as nn
from torch.nn import functional as F
from kernels import kernelize, Mode, register_kernel_mapping, LayerRepository
from kernels import use_kernel_forward_from_hub
@use_kernel_forward_from_hub("SiluAndMul")
class SiluAndMul(nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        d = x.shape[-1] // 2
        return F.silu(x[..., :d]) * x[..., d:]
mapping = {
"LayerNorm": {
"cuda": LayerRepository(
repo_id="kernels-community/activation",
layer_name="SiluAndMul",
)
}
}
register_kernel_mapping(mapping)
# Create and kernelize a model
model = nn.Sequential(
    nn.Linear(1024, 2048, device="cuda"),
    SiluAndMul(),
)
# Kernelize for inference
kernelized_model = kernelize(model, mode=Mode.INFERENCE)
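The use_fallback argument controls what happens when no compatible kernel can be found for an extensible layer. A short sketch, continuing the example above:
# Default: layers without a compatible kernel keep their original forward
kernelized_model = kernelize(model, mode=Mode.INFERENCE)

# Strict: raise an exception if any extensible layer has no compatible kernel
kernelized_model = kernelize(model, mode=Mode.INFERENCE, use_fallback=False)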
Device ( type: str properties: Optional[CUDAProperties] = None )
Represents a compute device with optional properties.
This class encapsulates device information including device type and optional device-specific properties like CUDA capabilities.
Example:
from kernels import Device, CUDAProperties
# Basic CUDA device
cuda_device = Device(type="cuda")
# CUDA device with specific capability requirements
cuda_device_with_props = Device(
type="cuda",
properties=CUDAProperties(min_capability=75, max_capability=90)
)
# MPS device for Apple Silicon
mps_device = Device(type="mps")

Create an appropriate repository set for this device type.
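Since kernel mappings accept Device objects as well as device type strings (see the mapping type above), a Device with CUDAProperties can be used to restrict a kernel to a range of CUDA capabilities. A minimal sketch; the capability bounds are illustrative:
from kernels import Device, CUDAProperties, LayerRepository, register_kernel_mapping

# Only select this kernel on GPUs with compute capability 7.5 up to 9.0
cuda_75_to_90 = Device(
    type="cuda",
    properties=CUDAProperties(min_capability=75, max_capability=90),
)

register_kernel_mapping(
    {
        "SiluAndMul": {
            cuda_75_to_90: LayerRepository(
                repo_id="kernels-community/activation",
                layer_name="SiluAndMul",
            )
        }
    }
)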
Mode ( value names = None module = None qualname = None type = None start = 1 )
Kernelize mode
The Mode flag is used by kernelize() to select kernels for the given mode. Mappings can be registered for
specific modes.
Note:
Different modes can be combined. For instance, INFERENCE | TORCH_COMPILE should be used for layers that
are used for inference with torch.compile.
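A sketch of combining flags both when registering a mode-specific mapping and when kernelizing, assuming combined flags are accepted as mapping keys; the repository is reused from the examples above:
import torch
import torch.nn as nn
from torch.nn import functional as F
from kernels import Mode, kernelize, register_kernel_mapping, use_kernel_forward_from_hub, LayerRepository

@use_kernel_forward_from_hub("SiluAndMul")
class SiluAndMul(nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        d = x.shape[-1] // 2
        return F.silu(x[..., :d]) * x[..., d:]

# Register a kernel specifically for inference with torch.compile
register_kernel_mapping(
    {
        "SiluAndMul": {
            "cuda": {
                Mode.INFERENCE | Mode.TORCH_COMPILE: LayerRepository(
                    repo_id="kernels-community/activation",
                    layer_name="SiluAndMul",
                ),
            }
        }
    }
)

model = nn.Sequential(nn.Linear(64, 128, device="cuda"), SiluAndMul())
model = kernelize(model, mode=Mode.INFERENCE | Mode.TORCH_COMPILE)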
LayerRepository ( repo_id: str layer_name: str revision: Optional[str] = None version: Optional[str] = None )
Parameters
repo_id (str) —
The Hub repository containing the layer.
layer_name (str) —
The name of the layer within the kernel repository.
revision (str, optional, defaults to "main") —
The specific revision (branch, tag, or commit) to download. Cannot be used together with version.
version (str, optional) —
The kernel version to download. This can be a Python version specifier, such as ">=1.0.0,<2.0.0".
Cannot be used together with revision.
Repository and name of a layer for kernel mapping.
Example:
from kernels import LayerRepository
# Reference a layer (uses the default "main" revision)
layer_repo = LayerRepository(
    repo_id="kernels-community/activation",
    layer_name="SiluAndMul",
)
# Reference a layer by version constraint
layer_repo_versioned = LayerRepository(
repo_id="kernels-community/activation",
layer_name="SiluAndMul",
version=">=0.0.3,<0.1"
)