Dataset Viewer

id (int64) | name (string) | deadline (timestamp[ns, tz=UTC]) | lang (string) | description (string) | reference (string, Python code) | gpu_types (list of strings)
---|---|---|---|---|---|---
398 | amd-identity | 2025-09-02T00:00:00 | py | This task is purely for testing the submission system. There will be *no* points.
> Input: (input_tensor, output_tensor)
> - input_tensor: Input data
> - output_tensor: Pre-allocated empty tensor of the same shape as `input_tensor`
> Output: Should return `output_tensor` after it has been filled with the values from `input_tensor`.
| import torch
from task import input_t, output_t
from utils import make_match_reference

def generate_input(size: int, seed: int) -> input_t:
    gen = torch.Generator(device='cuda')
    gen.manual_seed(seed)
    data = torch.empty(size, device='cuda', dtype=torch.float16)
    data.uniform_(0, 1, generator=gen)
    return data, torch.empty_like(data)

def ref_kernel(data: input_t) -> output_t:
    input, output = data
    output[...] = input
    return output

check_implementation = make_match_reference(ref_kernel)
| [
"MI300"
] |
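A minimal sketch of what a submission for this test task might look like, assuming the competition's usual `custom_kernel(data: input_t) -> output_t` entry point and that the `task` module is provided by the harness (both are assumptions, not guaranteed by this row):

```python
# submission.py -- hypothetical minimal entry for amd-identity.
# Assumes the harness imports `custom_kernel` and passes the (input, output) tuple described above.
import torch
from task import input_t, output_t

def custom_kernel(data: input_t) -> output_t:
    input_tensor, output_tensor = data
    # Copy the input into the pre-allocated output buffer and return it.
    output_tensor.copy_(input_tensor)
    return output_tensor
```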
399 | amd-fp8-mm | 2025-09-02T00:00:00 | py |
You will implement a custom fp8-blockwise matmul kernel optimized for MI300.
You will be given single-precision scaling factors for your matrices.
The shapes of all outer and inner dimensions of tensors are from DeepSeek-R1.
To be explicit, you will be given a tuple of tensors:
```
(a, b, a_scale, b_scale, c)
```
where `a` and `b` are the input matrices, `a_scale` and `b_scale` are the scaling factors for `a` and `b` respectively,
and `c` is the output matrix:
* `a` is M x K in column-major order in e4m3fnuz
* `b` is N x K in column-major order in e4m3fnuz
* `a_scale` is M x K // 128 in column-major order in fp32
* `b_scale` is N // 128 x K // 128 in column-major order in fp32
* `c` is M x N in ROW-major order in bf16
Matrix sizes `m` and `n` are divisible by 64, and `k` is divisible by 128.
The ranking criterion is the geometric mean of the benchmark results.
For the grand prize, your kernel will be evaluated against a speed-of-light analysis,
and the solution closest to the speed of light will be awarded the grand prize.
The speed-of-light analysis is:
```
   M      N      K    time[us]
1024   1536   7168        8.63
1024   4608   7168       25.89
6144   1536   7168       51.78
6144   4608   7168      155.30
1024   7168    256        3.17
6144   7168    256       17.27
```
| import torch
from task import input_t, output_t
from utils import make_match_reference

block_shape = (128, 128)

def generate_input(m: int, n: int, k: int, seed: int) -> input_t:
    """
    Generate random input and weights for Blockwise W8A8 Matmul scaled to FP32.
    Returns:
        Tuple of (
            a: torch.Tensor[float8_e4m3fnuz] of shape [m, k],
            b: torch.Tensor[float8_e4m3fnuz] of shape [n, k],
            a_scale: torch.Tensor[float32] of shape [m, k // 128],
            b_scale: torch.Tensor[float32] of shape [n // 128, k // 128],
            c: torch.Tensor[bfloat16] of shape [m, n]
        )
    """
    gen = torch.Generator(device='cuda')
    gen.manual_seed(seed)
    block_shape_n, block_shape_k = block_shape
    scale_n = (n + block_shape_n - 1) // block_shape_n
    scale_k = (k + block_shape_k - 1) // block_shape_k
    # Generate random inputs with FP8 quantization
    a = (torch.randn((k, m), dtype=torch.bfloat16, device="cuda", generator=gen)).to(torch.float8_e4m3fnuz)
    b = (torch.randn((k, n), dtype=torch.bfloat16, device="cuda", generator=gen)).to(torch.float8_e4m3fnuz)
    # Generate scaling factors with FP32
    a_scale = torch.randn([scale_k, m], dtype=torch.float32, device="cuda", generator=gen)
    b_scale = torch.randn([scale_k, scale_n], dtype=torch.float32, device="cuda", generator=gen)
    c = torch.zeros((m, n), dtype=torch.bfloat16, device="cuda")
    return (a.T, b.T, a_scale.T, b_scale.T, c)

def ref_kernel(data: input_t) -> output_t:
    """
    Highly inefficient torch reference implementation of FP8 GEMM.
    You can use this as a reference / starting template for your implementation.
    """
    # c: [m, n] is pre-allocated memory to help remove allocation overhead.
    a, b, a_scale, b_scale, c = data
    # a is M x K in column-major order, we convert here for simplicity.
    a = a.contiguous()
    a_scale = a_scale.contiguous()
    b_scale = b_scale.contiguous()
    # constants
    m = a.shape[0]
    n = b.shape[0]
    k = a.shape[1]
    block_shape_n = 128
    block_shape_k = 128
    scale_n = b_scale.shape[0]
    scale_k = b_scale.shape[1]
    # Apply blockwise scaling to input 'a'
    a_scale = a_scale.unsqueeze(-1).repeat(1, 1, block_shape_k)  # Shape: [m, scale_k, block_shape_k]
    a_scale = a_scale.reshape(m, scale_k * block_shape_k)
    a_scale = a_scale[:, :k]
    # Dequantize 'a', in your implementation you should do this at the end.
    a = a.to(a_scale.dtype) * a_scale
    # Apply blockwise scaling to input 'b'
    b_scale = (
        b_scale.view(-1, 1)
        .repeat(1, block_shape_n * block_shape_k)
        .view(scale_n, scale_k, block_shape_n, block_shape_k)
        .permute(0, 2, 1, 3)  # Reorder dimensions: [scale_n, blk_n, scale_k, blk_k]
        .reshape(scale_n * block_shape_n, scale_k * block_shape_k)
    )
    b_scale = b_scale[:n, :k]
    # Dequantize 'b', in your implementation you should do this at the end.
    b = b.to(b_scale.dtype) * b_scale
    # Compute FP8 GEMM and write to 'c'.
    c[...] = (a @ b.T).to(torch.bfloat16)
    return c

check_implementation = make_match_reference(ref_kernel, rtol=2e-02, atol=1e-03)
| [
"MI300"
] |
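To make the blockwise layout of amd-fp8-mm concrete, here is a hedged sketch of the scaling relation the reference kernel implements, written with explicit index arithmetic rather than the vectorized reshapes used above. The function name is illustrative only; it is not part of the competition harness.

```python
# Illustrative only: elementwise form of the blockwise dequantization used by ref_kernel.
import torch

def dequant_matmul_naive(a, b, a_scale, b_scale):
    # a: [M, K] float8_e4m3fnuz, a_scale: [M, ceil(K/128)] fp32
    # b: [N, K] float8_e4m3fnuz, b_scale: [ceil(N/128), ceil(K/128)] fp32
    m, k = a.shape
    n = b.shape[0]
    # Element (i, j) of `a` is scaled by a_scale[i, j // 128].
    a_dq = a.to(torch.float32) * a_scale.repeat_interleave(128, dim=1)[:, :k]
    # Element (i, j) of `b` is scaled by b_scale[i // 128, j // 128].
    b_dq = b.to(torch.float32) * b_scale.repeat_interleave(128, dim=0)[:n].repeat_interleave(128, dim=1)[:, :k]
    # c is [M, N] row-major: c[i, j] = sum_k a_dq[i, k] * b_dq[j, k]
    return (a_dq @ b_dq.T).to(torch.bfloat16)
```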
430 | amd-mixture-of-experts | 2025-09-02T00:00:00 | py | For a more complete description, see: https://tinyurl.com/amd-comp-moe
Implement a DeepSeek-style Mixture of Experts (MoE) layer for efficient transformer models
on a single MI300X device.
MoE is a technique that allows scaling model capacity without proportionally increasing computational costs
by using a routing mechanism to selectively activate only a subset of parameters for each token.
Your task:
- Implement token routing using a simple softmax-based learned router
- Route tokens to the top-k experts based on router probabilities
- Process tokens through their assigned experts
- Combine expert outputs weighted by router probabilities
- Calculate appropriate auxiliary losses for training stability
Input:
- `data`: Tuple of (input: torch.Tensor, weights: Dict[str, torch.Tensor], config: Dict)
- input: Input tensor of shape [bs, seq_len, d_hidden]
- weights: Dictionary containing model weights
- config: Dictionary containing model configuration parameters
Output:
- Tuple containing:
- output: Processed tensor [bs, seq_len, d_model]
- aux_data: Dictionary with auxiliary data like router probabilities and losses
| from utils import make_match_reference
from task import input_t, output_t
import torch
import torch.nn as nn
import torch.nn.functional as F
from typing import Dict, Tuple, List, Optional
import math

# Reference code in PyTorch
class Expert(nn.Module):
    def __init__(self, config: Dict, d_expert: Optional[int] = None):
        super().__init__()
        self.config = config
        self.act_fn = nn.SiLU()
        self.d_hidden: int = config["d_hidden"]
        self.d_expert: int = config["d_expert"] if d_expert is None else d_expert
        self.W_gate = nn.Linear(self.d_hidden, self.d_expert, bias=False)
        self.W_up = nn.Linear(self.d_hidden, self.d_expert, bias=False)
        self.W_down = nn.Linear(self.d_expert, self.d_hidden, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        gate = self.act_fn(self.W_gate(x))
        out = self.W_down(gate * self.W_up(x))
        return out

class MoEGate(nn.Module):
    def __init__(self, config: Dict):
        super().__init__()
        self.top_k: int = config["n_experts_per_token"]
        self.num_experts: int = config["n_routed_experts"]
        self.d_hidden: int = config["d_hidden"]
        self.W_g = nn.Linear(self.d_hidden, self.num_experts, bias=False)

    def forward(self, x: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
        logits = self.W_g(x)
        scores = logits.softmax(dim=-1)
        topk_scores, topk_indices = torch.topk(scores, k=self.top_k, dim=-1, sorted=False)
        return topk_indices, topk_scores

class MoE(nn.Module):
    def __init__(self, config: Dict):
        super().__init__()
        self.config = config
        self.experts = nn.ModuleList([
            Expert(config)
            for _ in range(config["n_routed_experts"])
        ])
        self.gating_network = MoEGate(config)
        shared_expert_dim = config["d_expert"] * config["n_shared_experts"]
        self.shared_expert = Expert(config=config, d_expert=shared_expert_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        shared_output = self.shared_expert(x)
        expert_indices, expert_scores = self.gating_network(x)
        batch_size, seq_len, hidden_dim = x.shape
        orig_shape = x.shape
        x_flat = x.view(-1, hidden_dim)
        flat_expert_indices = expert_indices.view(-1)
        flat_expert_weights = expert_scores.view(-1, 1)
        routed_output_flat = self.moe_infer(x_flat,
                                            flat_expert_indices,
                                            flat_expert_weights)
        routed_output = routed_output_flat.view(*orig_shape)
        return routed_output + shared_output

    @torch.no_grad()
    def moe_infer(self,
                  x: torch.Tensor,
                  flat_expert_indices: torch.Tensor,
                  flat_expert_weights: torch.Tensor
                  ) -> torch.Tensor:
        expert_cache = torch.zeros_like(x)
        idxs = flat_expert_indices.argsort()
        counts = flat_expert_indices.bincount().cpu().numpy()
        tokens_per_expert = counts.cumsum()
        num_per_tok = self.config["n_experts_per_token"]
        token_idxs = idxs // num_per_tok
        for expert_id, end_idx in enumerate(tokens_per_expert):
            start_idx = 0 if expert_id == 0 else tokens_per_expert[expert_id - 1]
            if start_idx == end_idx:
                continue
            expert = self.experts[expert_id]
            exp_token_idxs = token_idxs[start_idx:end_idx]
            expert_tokens = x[exp_token_idxs]
            expert_out = expert(expert_tokens)
            expert_out.mul_(flat_expert_weights[idxs[start_idx:end_idx]])
            expert_cache.scatter_reduce_(
                0,
                exp_token_idxs.view(-1, 1).repeat(1, x.shape[-1]),
                expert_out,
                reduce='sum'
            )
        return expert_cache

def ref_kernel(data: input_t) -> output_t:
    """
    Reference implementation of DeepSeek-style Mixture of Experts using PyTorch.
    Args:
        data: Tuple of (input: torch.Tensor, weights: Dict[str, torch.Tensor], config: Dict)
            - input: Input tensor of shape [batch_size, seq_len, hidden_dim]
            - weights: Dictionary containing model weights
            - config: Dictionary containing model configuration parameters
    Returns:
        Tuple containing:
            - output: Processed tensor [batch_size, seq_len, d_model]
            - aux_data: Dictionary with auxiliary data
    """
    input_tensor, weights, config = data
    num_experts = config["n_routed_experts"]
    moe = MoE(config)
    # Fill in the given weights of the model
    moe.gating_network.W_g.weight = nn.Parameter(weights['router.weight'])
    for i in range(num_experts):
        gate_proj_weight = weights[f'experts.{i}.0.weight']
        up_proj_weight = weights[f'experts.{i}.1.weight']
        down_proj_weight = weights[f'experts.{i}.2.weight']
        # Transpose weights to match expected shape for nn.Linear
        moe.experts[i].W_gate.weight = nn.Parameter(gate_proj_weight.t())
        moe.experts[i].W_up.weight = nn.Parameter(up_proj_weight.t())
        moe.experts[i].W_down.weight = nn.Parameter(down_proj_weight.t())
    moe.shared_expert.W_gate.weight = nn.Parameter(weights['shared_experts.0.weight'].t())
    moe.shared_expert.W_up.weight = nn.Parameter(weights['shared_experts.1.weight'].t())
    moe.shared_expert.W_down.weight = nn.Parameter(weights['shared_experts.2.weight'].t())
    output = moe(input_tensor)
    return output

# Input generation for the reference code
def generate_input(
    dhidden: int,
    dexpert: int,
    nroutedexperts: int,
    nsharedexperts: int,
    nexpertspertoken: int,
    bs: int,
    seqlen: int,
    seed: int
) -> input_t:
    # Really dumb but for now _ isn't parsing correctly.
    d_hidden = dhidden
    d_expert = dexpert
    n_routed_experts = nroutedexperts
    n_shared_experts = nsharedexperts
    n_experts_per_token = nexpertspertoken
    batch_size = bs
    seq_len = seqlen
    config = {
        "d_hidden": d_hidden,
        "d_expert": d_expert,
        "n_routed_experts": n_routed_experts,
        "n_shared_experts": n_shared_experts,
        "n_experts_per_token": n_experts_per_token,
        "batch_size": batch_size,
        "seq_len": seq_len,
    }
    gen = torch.Generator(device='cuda')
    gen.manual_seed(seed)
    num_experts = n_routed_experts
    expert_dim = d_expert
    weights = {}
    input_tensor = torch.randn(
        (batch_size, seq_len, d_hidden),
        device='cuda',
        dtype=torch.float16,
        generator=gen
    ).contiguous()
    # Initialize router weights
    weights['router.weight'] = torch.randn(
        (num_experts, d_hidden),
        device="cuda",
        dtype=torch.float16,
        generator=gen
    ) / math.sqrt(d_hidden)
    for i in range(num_experts):
        weights[f'experts.{i}.0.weight'] = torch.randn(
            (d_hidden, expert_dim),
            device='cuda',
            dtype=torch.float16,
            generator=gen
        ) / math.sqrt(expert_dim)
        weights[f'experts.{i}.1.weight'] = torch.randn(
            (d_hidden, expert_dim),
            device='cuda',
            dtype=torch.float16,
            generator=gen
        ) / math.sqrt(expert_dim)
        weights[f'experts.{i}.2.weight'] = torch.randn(
            (expert_dim, d_hidden),
            device='cuda',
            dtype=torch.float16,
            generator=gen
        ) / math.sqrt(d_hidden)
    weights['shared_experts.0.weight'] = torch.randn(
        (d_hidden, expert_dim * n_shared_experts),
        device='cuda',
        dtype=torch.float16,
        generator=gen
    ) / math.sqrt(expert_dim * n_shared_experts)
    weights['shared_experts.1.weight'] = torch.randn(
        (d_hidden, expert_dim * n_shared_experts),
        device='cuda',
        dtype=torch.float16,
        generator=gen
    ) / math.sqrt(expert_dim * n_shared_experts)
    weights['shared_experts.2.weight'] = torch.randn(
        (expert_dim * n_shared_experts, d_hidden),
        device='cuda',
        dtype=torch.float16,
        generator=gen
    ) / math.sqrt(d_hidden)
    return (input_tensor, weights, config)

check_implementation = make_match_reference(ref_kernel, rtol=1e-2, atol=1e-2) | [
"MI300"
] |
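As a quick illustration of the routing and combine steps listed in the amd-mixture-of-experts description, here is a hedged, stand-alone sketch of what the gate computes, using plain tensor ops rather than the reference classes above. The function name is illustrative and not part of the harness.

```python
# Illustrative sketch of top-k routing for a flattened batch of tokens.
import torch
import torch.nn.functional as F

def route_tokens(x: torch.Tensor, W_g: torch.Tensor, top_k: int):
    # x: [num_tokens, d_hidden], W_g: [n_routed_experts, d_hidden] (names follow the reference above)
    probs = F.softmax(x @ W_g.T, dim=-1)                  # router probabilities per expert
    topk_scores, topk_idx = torch.topk(probs, k=top_k, dim=-1)
    return topk_idx, topk_scores                          # experts to run and their combine weights

# The routed output for a token is then sum_j topk_scores[j] * expert_{topk_idx[j]}(x),
# added to the shared expert's output, as in MoE.forward above.
```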
463 | amd-mla-decode | 2025-09-02T00:00:00 | py | You will implement a custom MLA decode kernel optimized for MI300. A few things are simplified here:
1. Q, K, V use the bfloat16 data type
2. decode only, with a pre-allocated, non-paged latent KV cache
3. return the updated KV cache along with the MLA output
The shapes of all outer and inner dimensions of tensors are from DeepSeek-R1, with the number of heads split to fit on one GPU.
To be explicit, you will be given a tuple of tensors:
```yml
input [bs, sq, dim]
attn_output [bs, n_heads, sq, v_head_dim]
kv_cache [bs, sq, kv_lora_rank + qk_rope_head_dim]
```
where
0. bs::128 # batch size
1. prefill::[512, 2048, 4096, 6144] # as kv length
2. sq::1 # as only consider decoding
3. dim::7168 # hidden size of deepseek v3
4. kv_lora_rank::[512] # kv lora rank of deepseek v3
5. qk_rope_head_dim::[64] # rope embedding dimension
6. v_head_dim::128 # head size
7. n_heads::128 # num of attn heads
The ranking criterion is the geometric mean of the benchmark results.
For the grand prize, your kernel will be evaluated against a speed-of-light analysis,
and the solution closest to the speed of light will be awarded the grand prize.
The speed-of-light analysis is:
| bs | prefill | sq | dtype | roofline time(us) |
|---|---|---|---|---|
| 128 | 512 | 1 | bf16 | 54.62 |
| 128 | 2048 | 1 | bf16 | 141.16 |
| 128 | 4096 | 1 | bf16 | 210.75 |
| 128 | 6144 | 1 | bf16 | 280.87 |
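As a small worked example of the ranking criterion (the geometric mean of the benchmark results), this is how the four roofline times in the table above would combine. This is an illustration of the metric only, not an official scoring script:

```python
# Geometric mean of the roofline times listed above.
import math

times_us = [54.62, 141.16, 210.75, 280.87]
geomean = math.exp(sum(math.log(t) for t in times_us) / len(times_us))
print(f"{geomean:.2f} us")  # ≈ 146 us for these example numbers
```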
| import math
from dataclasses import dataclass

import torch
from torch import nn
import torch.nn.functional as F

from task import input_t, output_t
from utils import make_match_reference

class RoPE(nn.Module):
    def __init__(self, d_model: int):
        super().__init__()
        self.d_model = d_model
        theta = 10000 ** (-torch.arange(0, d_model // 2, dtype=torch.bfloat16) / (d_model // 2))
        self.register_buffer("theta", theta)

    def rotate_half(self, x: torch.Tensor) -> torch.Tensor:
        x1, x2 = x.chunk(2, dim=-1)
        return torch.cat((-x2, x1), dim=-1)

    def forward(self, x: torch.Tensor, start_pos: int = 0) -> torch.Tensor:
        seq_len = x.size(-2)
        d_model = x.size(-1)
        assert d_model == self.d_model
        seq_idx = torch.arange(start_pos, start_pos + seq_len, device=x.device)
        idx_theta = torch.einsum('s,d->sd', seq_idx, self.theta)
        idx_theta2 = torch.cat([idx_theta, idx_theta], dim=-1)
        cos = idx_theta2.cos().to(torch.bfloat16)
        sin = idx_theta2.sin().to(torch.bfloat16)
        return x * cos + self.rotate_half(x) * sin

class KVCache(nn.Module):
    def __init__(self, kv_cache_shape: tuple, **kwargs) -> None:
        super().__init__(**kwargs)
        self.register_buffer('data', torch.zeros(kv_cache_shape, dtype=torch.bfloat16))
        self.seq_len = 0
        self.zero()

    def zero(self) -> None:
        self.data.zero_()

    def get_data(self) -> torch.Tensor:
        return self.data

    def forward(self, c_kv: torch.Tensor) -> torch.Tensor:
        assert self.seq_len + c_kv.size(1) <= self.data.size(1), "KV Cache Exceeded"
        self.data = self.data.to(c_kv.dtype)
        self.data[
            :, self.seq_len : self.seq_len + c_kv.size(1), :
        ] = c_kv
        self.seq_len += c_kv.size(1)
        return self.data[:, :self.seq_len], self.seq_len

@dataclass
class Config:
    batch_size: int
    dim: int
    n_heads: int
    q_lora_rank: int
    kv_lora_rank: int
    qk_nope_head_dim: int
    qk_rope_head_dim: int
    v_head_dim: int
    seq_len: int
    max_seq_len: int
    kv_cache_shape: tuple
    Q_proj_down_weight: torch.Tensor
    Q_proj_up_weight: torch.Tensor
    KV_proj_down_weight: torch.Tensor
    KV_proj_up_weight: torch.Tensor
    wo_weight: torch.Tensor

class MLA(nn.Module):
    def __init__(self, config: Config):
        super().__init__()
        self.dim = config.dim
        self.n_heads = config.n_heads
        self.q_lora_rank = config.q_lora_rank
        self.kv_lora_rank = config.kv_lora_rank
        self.nope_head_dim = config.qk_nope_head_dim
        self.rope_head_dim = config.qk_rope_head_dim
        self.v_head_dim = config.v_head_dim
        # Down-projection matrices
        self.Q_proj_down = nn.Linear(self.dim, self.q_lora_rank, dtype=torch.bfloat16, bias=False)
        self.KV_proj_down = nn.Linear(self.dim, self.kv_lora_rank + self.rope_head_dim, dtype=torch.bfloat16, bias=False)
        # Up-projection and rope projection matrices
        self.Q_proj_up = nn.Linear(self.q_lora_rank, (self.nope_head_dim + self.rope_head_dim) * self.n_heads, dtype=torch.bfloat16, bias=False)
        self.KV_proj_up = nn.Linear(self.kv_lora_rank, (self.nope_head_dim + self.v_head_dim) * self.n_heads, dtype=torch.bfloat16, bias=False)
        # RoPE on half embeddings
        self.q_rope = RoPE(self.rope_head_dim)
        self.k_rope = RoPE(self.rope_head_dim)
        # Output projection
        self.wo = nn.Linear(self.v_head_dim * self.n_heads, self.dim, dtype=torch.bfloat16, bias=False)
        self.eps = 1e-6

    def forward(self, x: torch.Tensor, kv_cache: KVCache) -> torch.Tensor:
        # seq_len = 1 always here
        batch_size, seq_len, model_dim = x.size()
        ########################################################################
        # Step 1: Handle down-projection + KV cache                            #
        ########################################################################
        q_lora = self.Q_proj_down(x)
        kv_lora = self.KV_proj_down(x)
        kv_lora, kv_len = kv_cache(kv_lora)
        query_pos = kv_len - 1
        ########################################################################
        # Step 2: Up-project and prepare NoPE + RoPE                           #
        ########################################################################
        # Handle queries Q first
        q_nope_and_rope = self.Q_proj_up(q_lora).view(
            batch_size, seq_len, self.n_heads, self.nope_head_dim + self.rope_head_dim)
        q_nope, q_rope = torch.split(q_nope_and_rope, [self.nope_head_dim, self.rope_head_dim], dim=-1)
        # Handle keys and values K/V. V does not need RoPE
        kv_nope, k_rope = torch.split(kv_lora, [self.kv_lora_rank, self.rope_head_dim], dim=-1)
        kv_nope = self.KV_proj_up(kv_nope).view(
            batch_size, kv_len, self.n_heads, self.nope_head_dim + self.v_head_dim)
        k_nope, v = torch.split(kv_nope, [self.nope_head_dim, self.v_head_dim], dim=-1)
        ########################################################################
        # Step 3: Handle RoPE Stream                                           #
        ########################################################################
        # Compute RoPE for queries and combine with no-RoPE part
        q_rope = q_rope.permute(0, 2, 1, 3)  # bs x n_heads x seq_len x rope_head_dim
        q_rope = self.q_rope(q_rope, start_pos=query_pos)
        q_nope = q_nope.permute(0, 2, 1, 3)  # bs x n_heads x seq_len x nope_head_dim
        q = torch.concat([q_nope, q_rope], dim=-1)
        # Compute RoPE for keys and combine with no-RoPE part
        k_rope = k_rope[:, None, :, :]
        k_rope = self.k_rope(k_rope).expand(-1, self.n_heads, -1, -1)
        k_nope = k_nope.permute(0, 2, 1, 3)  # bs x n_heads x kv_len x nope_head_dim
        k = torch.concat([k_nope, k_rope], dim=-1)
        ########################################################################
        # Compute Multi-head Attention                                         #
        ########################################################################
        v = v.permute(0, 2, 1, 3)  # bs x n_heads x kv_len x v_head_dim
        scores = torch.matmul(q, k.transpose(-1, -2)) / math.sqrt(self.rope_head_dim + self.nope_head_dim)
        attn = F.softmax(scores, dim=-1).to(torch.bfloat16)
        y = torch.matmul(attn, v).view(batch_size, 1, -1)
        y = self.wo(y)
        return y, kv_cache.get_data()
def generate_input(batchsize, dim, dq, prefill, seed):
    # Sizes derived from: https://github.com/deepseek-ai/DeepSeek-V3/blob/main/inference/model.py
    gen = torch.Generator(device='cuda')
    gen.manual_seed(seed)
    # Generate weights for linear layers
    Q_proj_down_weight = torch.randn((dq, dim), dtype=torch.bfloat16, generator=gen, device='cuda') / math.sqrt(dim)
    KV_proj_down_weight = torch.randn((512 + 64, dim), dtype=torch.bfloat16, generator=gen, device='cuda') / math.sqrt(dim)
    Q_proj_up_weight = torch.randn(((128 + 64) * 128, dq), dtype=torch.bfloat16, generator=gen, device='cuda') / math.sqrt(dq)
    KV_proj_up_weight = torch.randn(((128 + 128) * 128, 512), dtype=torch.bfloat16, generator=gen, device='cuda') / math.sqrt(512)
    wo_weight = torch.randn((dim, 128 * 128), dtype=torch.bfloat16, generator=gen, device='cuda') / math.sqrt(128 * 128)
    config = Config(
        batch_size=batchsize,
        dim=dim,
        q_lora_rank=dq,
        n_heads=128,
        kv_lora_rank=512,
        qk_nope_head_dim=128,
        qk_rope_head_dim=64,
        v_head_dim=128,
        seq_len=1,
        max_seq_len=8192,
        kv_cache_shape=(batchsize, 8192, 512 + 64),
        Q_proj_down_weight=Q_proj_down_weight,
        Q_proj_up_weight=Q_proj_up_weight,
        KV_proj_down_weight=KV_proj_down_weight,
        KV_proj_up_weight=KV_proj_up_weight,
        wo_weight=wo_weight,
    )
    x = torch.randn((config.batch_size, 1, config.dim), dtype=torch.bfloat16, generator=gen, device='cuda')
    # Pre-fill KV cache
    kv_cache = KVCache((config.batch_size, config.max_seq_len, config.kv_lora_rank + config.qk_rope_head_dim)).to('cuda')
    pre_filled_cache = torch.randn((config.batch_size, prefill, config.kv_lora_rank + config.qk_rope_head_dim),
                                   dtype=torch.bfloat16, generator=gen, device='cuda')
    kv_cache(pre_filled_cache)
    return config, x, kv_cache

def ref_kernel(data: input_t) -> output_t:
    config, x, kv_cache = data
    # Load in model weights
    model = MLA(config).to('cuda')
    model.Q_proj_down.weight = nn.Parameter(config.Q_proj_down_weight)
    model.Q_proj_up.weight = nn.Parameter(config.Q_proj_up_weight)
    model.KV_proj_down.weight = nn.Parameter(config.KV_proj_down_weight)
    model.KV_proj_up.weight = nn.Parameter(config.KV_proj_up_weight)
    model.wo.weight = nn.Parameter(config.wo_weight)
    output, kv_cache = model(x, kv_cache)
    return output, kv_cache

check_implementation = make_match_reference(ref_kernel, rtol=2e-02, atol=8e-03)
def time_mla(model, x, kv_cache, num_warmup=3, num_trials=5):
    # Warmup runs
    for _ in range(num_warmup):
        output, _ = model(x, kv_cache)
    torch.cuda.synchronize()
    # Timed runs
    times = []
    for _ in range(num_trials):
        kv_cache = KVCache((config.batch_size, config.max_seq_len, config.kv_lora_rank + config.qk_rope_head_dim)).to('cuda')
        start = torch.cuda.Event(enable_timing=True)
        end = torch.cuda.Event(enable_timing=True)
        start.record()
        output, updated_kv = model(x, kv_cache)
        end.record()
        torch.cuda.synchronize()
        times.append(start.elapsed_time(end))
    avg_time = sum(times) / len(times)
    return output, updated_kv, avg_time, times

if __name__ == "__main__":
    # Generate test input
    batchsize = 128
    dim = 7168
    dq = 1536
    prefill = 512
    seed = 97
    # Create model and inputs
    config, x, kv_cache = generate_input(batchsize, dim, dq, prefill, seed)
    model = MLA(config).to('cuda')
    # Run model with timing
    output, updated_kv, avg_time, times = time_mla(model, x, kv_cache)
    # Test reference kernel
    ref_output, ref_kv = ref_kernel((config, x, kv_cache))
    print("\nReference kernel output:")
    print(f"Output shape: {ref_output.shape}")
    print(f"KV cache shape: {ref_kv.shape}")
    print("\nFirst few values of reference output:")
    print(ref_output[0, :10])
    # Compare outputs
    print("\nOutput difference:")
    print(f"Max absolute difference: {torch.max(torch.abs(output - ref_output))}")
    print(f"Mean absolute difference: {torch.mean(torch.abs(output - ref_output))}")
    print(f"Input shape: {x.shape}")
    print(f"Output shape: {output.shape}")
    print(f"Updated KV cache shape: {updated_kv.shape}")
    print("\nFirst few values of output:")
    print(output[0, :10])
    print(f"\nTiming results over {len(times)} runs (ms):")
    print(f"Average: {avg_time:.2f}")
    print(f"Individual times: {[f'{t:.2f}' for t in times]}")
| [
"MI300"
] |
If you use GPUMODE/amd-kernels-2025 in your work, please cite:
```bibtex
@inproceedings{
  zhang2025kernelbot,
  title={KernelBot: A Competition Platform for Writing Heterogeneous {GPU} Code},
  author={Alex L Zhang and Matej Sirovatka and Erik Schultheis and Benjamin Horowitz and Mark Saroufim},
  booktitle={Championing Open-source DEvelopment in ML Workshop @ ICML25},
  year={2025},
  url={https://openreview.net/forum?id=bq9U4dmuyJ}
}
```