Zacharias030 committed
Commit 19545c6 · verified · 1 Parent(s): f5663dc

Update README.md

Files changed (1): README.md +4 -4
README.md CHANGED

@@ -33,9 +33,9 @@ We finetuned Llama3.1-8B-Instruct on the created dataset using supervised instru
 
 | Model | Parameters (B) | Score | Pass@k |
 |-------|---------------|-------|--------|
-| KernelLLM | 8 | 15.5 | 1 |
-| KernelLLM | 8 | 34.7 | 10 |
-| KernelLLM | 8 | 39.8 | 20 |
+| KernelLLM | 8 | 20.2 | 1 |
+| KernelLLM | 8 | 51.8 | 10 |
+| KernelLLM | 8 | 57.1 | 20 |
 | DeepSeek V3 | 671 | 16 | 1 |
 | GPT-4o | ~200 | 15 | 1 |
 | Qwen2.5 | 32 | 15 | 1 |
@@ -127,7 +127,7 @@ model = KernelLLM()
 model.stream_raw("Your prompt here", max_new_tokens=2048)
 
 # Generate raw text without the Triton-specific prompt template
-raw_output = model.generate_raw("Your prompt here", temperature=0.6, max_new_tokens=2048)
+raw_output = model.generate_raw("Your prompt here", temperature=1.0, max_new_tokens=2048)
 ```
 
 ## Current Limitations and Future Work
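
For context on the two changes: the Pass@k column reports the pass@k metric, and the sampling temperature matters to it because higher-temperature generation produces more diverse candidates, which helps when k > 1. Below is a minimal sketch of the conventional unbiased pass@k estimator (Chen et al., 2021); the `pass_at_k` helper is illustrative only and is not part of the KernelLLM package:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples,
    drawn from n generations of which c are correct, passes.
    Illustrative helper; not part of the KernelLLM API."""
    if n - c < k:
        return 1.0  # fewer than k incorrect samples, so a correct one is guaranteed
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 20 generations for one problem, 7 of which are correct.
print(pass_at_k(20, 7, 1))   # ≈ 0.35
print(pass_at_k(20, 7, 10))  # ≈ 0.9985
```

Averaging this quantity over benchmark problems yields table-style Pass@1/10/20 scores. Sampling at temperature 1.0 rather than 0.6 typically trades a little per-sample accuracy for more diversity across samples, which is consistent with the larger relative gains at Pass@10 and Pass@20 in the updated table.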