Satori-SWE-RL-32B

Overview

🚀 Satori-SWE-RL-32B is trained specifically to resolve software engineering tasks efficiently, using our proposed EvoScale test-time scaling technique and a novel two-stage training framework (SFT followed by RL). The model iteratively self-improves its own generations to progressively write a better patch.

Training Data

Resources

🔗 GitHub Repository: Satori-SWE

🔗 Blog Post: Blog

🔗 Research Paper: Paper

Prompt Template

classical_sft_prompt = """You are an expert software engineer and seasoned code reviewer, specializing in bug localization and code optimization within real-world code repositories. Your strengths lie in understanding complex codebase structures and precisely identifying and modifying the relevant parts of the code to resolve issues. You also excel at articulating your reasoning process in a coherent, step-by-step manner that leads to efficient and correct bug fixes.

You will be provided with a codebase and an issue description. Your task is to simulate a complete reasoning process—step-by-step—as if solving the issue from scratch, followed by the code modifications to resolve the issue.

---

# Issue Statement
{problem_statement}

---

# Files to be Modified
Below are some code files that might be relevant to the issue above. One or more of these files may contain bugs.

{files}

---

# Reasoning Guidelines
Your reasoning process should generally follow these steps, with flexibility to adjust as needed for clarity and accuracy:

1. **Issue Analysis**: Start by thoroughly analyzing the issue. Explain what the problem is, why it matters, and what the intended behavior should be. Identify the key goals and constraints that must be addressed in your solution.

2. **Task Decomposition**: Break down the issue into smaller, manageable sub-tasks. Describe the purpose of each sub-task and how it contributes to solving the overall problem.

3. **Code Localization and Editing**: For each sub-task:
   - Identify relevant code snippets by file path and code location.
   - Explain how each snippet relates to the sub-task.
   - Describe how the code should be changed and justify your reasoning.
   - After thorough explanation, provide the corresponding edited code.

---

# General Requirements
1. **Clear and Evidence-Based Reasoning**: Provide clear and precise reasoning for each step, strictly based on the provided issue and code without inferring information not explicitly stated.
2. **Comprehensive and Concise**: Address all relevant aspects of the issue comprehensively while being concise. Justify the exclusion of any sections that are not relevant.
3. **Detailed Guidance**: Ensure the reasoning steps are detailed enough to allow someone unfamiliar with the solution to infer and implement the necessary code modifications.

---

# Response Format
1. The reasoning process should be enclosed in <think> ... </think>.
2. The final patch should be output in a standalone Python code block *after* the </think> block.
3. Do not include any commentary or justification after the </think> block.

---

# Patch Format
Please generate *SEARCH/REPLACE* edits to fix the issue. Every *SEARCH/REPLACE* edit must use this format:
1. The file path
2. The start of search block: <<<<<<< SEARCH
3. A contiguous chunk of lines to search for in the existing source code
4. The dividing line: =======
5. The lines to replace into the source code
6. The end of the replace block: >>>>>>> REPLACE

If the `Files to be Modified` part contains multiple files, or multiple locations in a single file require changes, you should provide separate patches for each modification, clearly indicating the file name and the specific location of the modification.

Please note that the *SEARCH/REPLACE* edit REQUIRES PROPER INDENTATION. For example, if you would like to add the line '        print(x)', you must fully write that out, with all those spaces before the code! And remember to wrap the *SEARCH/REPLACE* edit in blocks ```python...```

# Example Response
<think>
1. Analyze the issue...
2. Locate the relevant code...
3. Apply necessary changes...
</think>

```python
### mathweb/flask/app.py
<<<<<<< SEARCH
from flask import Flask
=======
import math
from flask import Flask
>>>>>>> REPLACE
```

```python
### mathweb/utils/calc.py
<<<<<<< SEARCH
def calculate_area(radius):
    return 3.14 * radius * radius
=======
def calculate_area(radius):
    return math.pi * radius ** 2
>>>>>>> REPLACE
```

---

Please provide your response below.

"""
mutation_sft_prompt = """You are an expert software engineer and seasoned code reviewer, specializing in bug localization and code optimization, with a particular talent for critically evaluating teammates' patches and synthesizing high-quality, precise solutions from collaborative efforts.

You will be presented with a GitHub issue, the relevant source code files, and five *candidate patches* submitted by your teammates. Your task is twofold:

1. **Patch Review**: Carefully evaluate each of the five candidate patches **individually**. Identify whether each patch resolves the issue correctly, partially, or incorrectly. If you identify any issues (e.g., logical errors, misunderstandings of the bug, overlooked edge cases, or incomplete fixes), explain them clearly and suggest what could be improved or corrected. 
   
   Even if a patch appears mostly correct, you should still analyze its strengths and limitations in detail. Treat this as a collaborative peer-review process: constructive, technical, and focused on improving code quality.

2. **Patch Synthesis**: After analyzing all five candidate patches, synthesize your understanding to produce your **own final code patch** that fully resolves the issue. Your patch should:
   - Be grounded solely in the issue description and provided source code.
   - Be informed by your peer review, but not copy any one patch outright.

---

# Issue Statement
{problem_statement}

---

# Files to be Modified
Below are some code files that might be relevant to the issue above. One or more of these files may contain bugs.

{files}

---

# Candidate Patches (From Collaborators)
Below are five proposed patches submitted by your teammates. You will evaluate them individually.
{candidate_patches}

---

# Reasoning and Review Guidelines

Your response should be structured into two parts:

## Part 1: Peer Patch Review
For each of the five candidate patches:
   - Analyze the candidate patch's intent and correctness.
   - Identify what it does well, what it gets wrong (if anything), and how it could be improved.
   - Use precise references to the provided issue and source code files to justify your evaluation.
      
## Part 2: Final Patch Synthesis
After completing all five reviews, your reasoning process should generally follow these steps, with flexibility to adjust as needed for clarity and accuracy:

1. **Issue Analysis**: Start by thoroughly analyzing the issue. Explain what the problem is, why it matters, and what the intended behavior should be. Identify the key goals and constraints that must be addressed in your solution.

2. **Task Decomposition**: Break down the issue into smaller, manageable sub-tasks. Describe the purpose of each sub-task and how it contributes to solving the overall problem.

3. **Code Localization and Editing**: For each sub-task:
   - Identify relevant code snippets by file path and code location.
   - Explain how each snippet relates to the sub-task.
   - Describe how the code should be changed and justify your reasoning.
   - After thorough explanation, provide the corresponding edited code.

---

# General Requirements
1. **Clear and Evidence-Based Reasoning**: Provide clear and precise reasoning for each step, strictly based on the provided issue and code without inferring information not explicitly stated.
2. **Comprehensive and Concise**: Address all relevant aspects of the issue comprehensively while being concise. Justify the exclusion of any sections that are not relevant.
3. **Detailed Guidance**: Ensure the reasoning steps are detailed enough to allow someone unfamiliar with the solution to infer and implement the necessary code modifications.

---

# Response Format
1. The reasoning process should be enclosed in <think> ... </think>.
2. The final patch should be output in a standalone Python code block *after* the </think> block.
3. Do not include any commentary or justification after the </think> block.

---

# Patch Format
Please generate *SEARCH/REPLACE* edits to fix the issue. Every *SEARCH/REPLACE* edit must use this format:
1. The file path
2. The start of search block: <<<<<<< SEARCH
3. A contiguous chunk of lines to search for in the existing source code
4. The dividing line: =======
5. The lines to replace into the source code
6. The end of the replace block: >>>>>>> REPLACE

If the `Files to be Modified` part contains multiple files, or multiple locations in a single file require changes, you should provide separate patches for each modification, clearly indicating the file name and the specific location of the modification.

Please note that the *SEARCH/REPLACE* edit REQUIRES PROPER INDENTATION. For example, if you would like to add the line '        print(x)', you must fully write that out, with all those spaces before the code! And remember to wrap the *SEARCH/REPLACE* edit in blocks ```python...```

# Example Response
<think>
1. Review of candidate patch: 
   - Review of patch-1: This patch attempts to fix X by modifying function Y. However, it fails to consider Z...
   - Review of patch-2: ...
   - Review of patch-3: ...
   - Review of patch-4: ...
   - Review of patch-5: ...
2. Analyze the issue by myself...
3. Locate the relevant code...
4. Apply necessary changes...
</think>

```python
### mathweb/flask/app.py
<<<<<<< SEARCH
from flask import Flask
=======
import math
from flask import Flask
>>>>>>> REPLACE
```

```python
### mathweb/utils/calc.py
<<<<<<< SEARCH
def calculate_area(radius):
    return 3.14 * radius * radius
=======
def calculate_area(radius):
    return math.pi * radius ** 2
>>>>>>> REPLACE
```

---

Please provide your response below.

"""

Usage: Toy Example

from vllm import LLM, SamplingParams

def generate(question, model_path):
    llm = LLM(
        model=model_path,
        trust_remote_code=True,
        tensor_parallel_size=8,
    )
    
    sampling_params = SamplingParams(
        max_tokens=8192,
        temperature=1.2,
        n=1,
    )
    outputs = llm.generate([question], sampling_params, use_tqdm=True)
    completions = [[output.text for output in output_item.outputs] for output_item in outputs]

    return completions
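
# Note: generate() builds a fresh vLLM engine on every call. For repeated EvoScale-style
# rounds it is cheaper to construct the engine once and to sample several candidates per
# prompt (n > 1). The helper below is an illustrative sketch, not the released harness.
def generate_many(prompts, llm, n=5, temperature=1.2, max_tokens=8192):
    params = SamplingParams(max_tokens=max_tokens, temperature=temperature, n=n)
    outputs = llm.generate(prompts, params, use_tqdm=True)
    return [[o.text for o in item.outputs] for item in outputs]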
    
# Classical Inference
model_path = "Satori-reasoning/Satori-SWE-RL-32B"
problem_statement = """I'm running `missing_colon.py` as follows:

```python
division(23, 0)
```

but I get the following error:

```
  File "/Users/fuchur/Documents/24/git_sync/swe-agent-test-repo/tests/./missing_colon.py", line 4
    def division(a: float, b: float) -> float
                                             ^
SyntaxError: invalid syntax
```"""
file_str_concat = """```python
### src/testpkg/missing_colon.py
#!/usr/bin/env python3

def division(a: float, b: float) -> float
    return a/b


if __name__ == "__main__":
    print(division(123, 15))
```"""

completions = generate(
    classical_sft_prompt.format(
        problem_statement=problem_statement,
        files=file_str_concat
    ),
    model_path
)

for completion in completions:
    print(completion[0])
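
# The response format puts the model's reasoning inside <think>...</think> and the final
# *SEARCH/REPLACE* patch in ```python blocks after it. A minimal post-processing sketch
# (an assumption, not part of the released pipeline) for separating the two:
import re

def extract_patch(completion):
    """Return everything after the model's </think> tag (the fenced patch blocks)."""
    return completion.split("</think>")[-1].strip()

def extract_edits(completion):
    """Return the raw SEARCH/REPLACE edits, one per ```python block after </think>."""
    return re.findall(r"```python\n(.*?)```", extract_patch(completion), flags=re.DOTALL)

print(extract_patch(completions[0][0]))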

    
    
# Mutation inference

candidate_patches = """<patch>
```python
### src/testpkg/missing_colon.py
<<<<<<< SEARCH

def division(a: float, b: float) -> float
    return a/b


if __name__ == "__main__":
    print(division(123, 15))
=======

def division(a: float, b: float) -> float:
    return a/b


if __name__ == "__main__":
    print(division(123, 15))
>>>>>>> REPLACE
```
</patch>

<patch>
```python
### src/testpkg/missing_colon.py
<<<<<<< SEARCH

def division(a: float, b: float) -> float
    return a/b


if __name__ == "__main__":
    print(division(123, 15))
=======

def division(a: float, b: float) -> float:
    return a/b


if __name__ == "__main__":
    print(division(123, 15))
>>>>>>> REPLACE
```
</patch>

<patch>
```python
### src/testpkg/missing_colon.py
<<<<<<< SEARCH

def division(a: float, b: float) -> float
    return a/b


if __name__ == "__main__":
    print(division(123, 15))
=======

def division(a: float, b: float) -> float:
    return a/b


if __name__ == "__main__":
    print(division(123, 15))
>>>>>>> REPLACE
```
</patch>

<patch>
```python
### src/testpkg/missing_colon.py
<<<<<<< SEARCH

def division(a: float, b: float) -> float
    return a/b


if __name__ == "__main__":
    print(division(123, 15))
=======

def division(a: float, b: float) -> float:
    return a/b


if __name__ == "__main__":
    print(division(123, 15))
>>>>>>> REPLACE
```
</patch>

<patch>
```python
### src/testpkg/missing_colon.py
<<<<<<< SEARCH

def division(a: float, b: float) -> float
    return a/b


if __name__ == "__main__":
    print(division(123, 15))
=======

def division(a: float, b: float) -> float:
    return a/b


if __name__ == "__main__":
    print(division(123, 15))
>>>>>>> REPLACE
```
</patch>"""

mutation_completions = generate(
    mutation_sft_prompt.format(
        problem_statement=problem_statement,
        files=file_str_concat,
        candidate_patches=candidate_patches
    ),
    model_path
)

for mutation_completion in mutation_completions:
    print(mutation_completion[0])
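
The two prompts can be chained into an EvoScale-style refinement loop: sample several candidate patches with the classical prompt, wrap each one in <patch> ... </patch> tags, and feed them back through the mutation prompt for another round. The sketch below is an illustrative assumption about how the pieces fit together; it reuses the hypothetical generate_many and extract_patch helpers defined above, and the round and candidate counts are arbitrary, so treat it as a starting point rather than the released evaluation harness.

def evoscale_refine(problem_statement, files, llm, num_rounds=2, num_candidates=5):
    # Round 0: sample independent candidate patches with the classical prompt.
    prompt = classical_sft_prompt.format(problem_statement=problem_statement, files=files)
    patches = [extract_patch(c) for c in generate_many([prompt], llm, n=num_candidates)[0]]
    for _ in range(num_rounds):
        # Wrap the current candidates in <patch> tags and ask the model to refine them.
        candidate_str = "\n\n".join(f"<patch>\n{p}\n</patch>" for p in patches)
        prompt = mutation_sft_prompt.format(
            problem_statement=problem_statement,
            files=files,
            candidate_patches=candidate_str,
        )
        patches = [extract_patch(c) for c in generate_many([prompt], llm, n=num_candidates)[0]]
    return patches

# Example usage (construct the engine once and reuse it across rounds):
# llm = LLM(model=model_path, trust_remote_code=True, tensor_parallel_size=8)
# final_patches = evoscale_refine(problem_statement, file_str_concat, llm)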

Citation

If you find this model useful, please cite our paper:

@misc{zeng2025satorisweevolutionarytesttimescaling,
      title={Satori-SWE: Evolutionary Test-Time Scaling for Sample-Efficient Software Engineering}, 
      author={Guangtao Zeng and Maohao Shen and Delin Chen and Zhenting Qi and Subhro Das and Dan Gutfreund and David Cox and Gregory Wornell and Wei Lu and Zhang-Wei Hong and Chuang Gan},
      year={2025},
      eprint={2505.23604},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.23604}, 
}