FrogBoss-32B-2510
| Field | Value |
|---|---|
| Developer | Microsoft Corporation Authorized representative: Microsoft Ireland Operations Limited 70 Sir John Rogerson’s Quay, Dublin 2, D02 R296, Ireland |
| Description | FrogBoss is a 32B-parameter coding agent specialized in fixing bugs in code. FrogBoss was obtained by fine‑tuning a Qwen3‑32B language model on debugging trajectories generated by Claude Sonnet 4 within the BugPilot framework. The training data combines real‑world bugs from R2E‑Gym, synthetic bugs from SWE‑Smith, and novel “FeatAdd” bugs. |
| Model architecture | FrogBoss is based on Qwen3‑32B, a transformer model with 64k context, optimized for multi‑turn debugging workflows. |
| Parameters | 32B |
| Inputs | Text (max input length: 46k tokens). |
| Context length | 64k tokens |
| Outputs | Text (max output length: 8k tokens). |
| GPUs | 8 × H100 |
| Training time | 1 day |
| Public data summary | Tech Report: https://arxiv.org/abs/2510.19898 <br> Blog Post: https://microsoft.github.io/debug-gym/blog/2025/10/bug-pilot |
| Dates | August-October, 2025 |
| Status | Static model trained on offline dataset collected before September 2025. |
| Release date / EU release date | January 12th, 2026 |
| License | MIT |
| Model dependencies | https://huggingface.co/Qwen/Qwen3-32B |
| Additional related assets | N/A |
| Acceptable use policy | N/A |
Model overview
FrogBoss is built on the Qwen3-32B transformer architecture with a maximum context length of 64k tokens. It is optimized for multi-turn debugging workflows and complex code reasoning. Unlike general-purpose LLMs, FrogBoss is specialized for software engineering tasks.
The training procedure consists of supervised fine-tuning (SFT) on successful debugging trajectories produced by a strong teacher model (e.g., claude-sonnet-4). Those trajectories were obtained from a mix of real-world and synthetic bug datasets (e.g., R2E-Gym, SWE-Smith) and high-quality FeatAdd bugs generated through the BugPilot framework. This approach ensures the model learns realistic debugging patterns rather than trivial fixes. Compared to other open-weight models, FrogBoss stands out for its parameter efficiency, achieving state-of-the-art performance on SWE-Bench Verified (Pass@1: 54.6%) with only 32B parameters, and for its emphasis on realistic, multi-file debugging scenarios, which makes it more robust in real-world coding environments.
Alignment approach
The model was trained to align with its intended behavior of producing accurate bug identification and code patches by curating high-quality trajectories and removing failure patterns. Specifically, all unsuccessful debugging attempts were excluded from the training data to prevent reinforcement of ineffective strategies. Additionally, safeguards were applied to the teacher model to prevent "cheating", and tasks on which the teacher kept failing were dropped. This ensures that the model consistently learns from successful problem-solving examples and produces reliable bug identification and code-fix proposals aligned with developer expectations.
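As a minimal illustration of the curation step described above (not the actual BugPilot pipeline), keeping only successful, non-cheating trajectories could look like the following; the field names (`success`, `teacher_cheated`) are hypothetical, not the released data schema:

```python
# Illustrative sketch of trajectory curation; field names are hypothetical.
trajectories = [
    {"task_id": "r2e-001", "success": True,  "teacher_cheated": False},
    {"task_id": "r2e-002", "success": False, "teacher_cheated": False},
    {"task_id": "swe-003", "success": True,  "teacher_cheated": True},
]

# Keep only successful runs where the teacher did not game the task.
training_set = [
    t for t in trajectories
    if t["success"] and not t["teacher_cheated"]
]
```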
Usage
Primary use cases
FrogBoss is intended for software engineering and debugging tasks in controlled research environments, excelling at multi-turn reasoning, code repair, and feature-level bug resolution across complex repositories. It is optimized for scenarios such as automated bug fixing.
Intended Uses
- Debugging and repairing code in controlled environments.
- Automated resolution of software bugs across multi-file repositories.
- Research and development of agentic workflows for software engineering.
Out-of-scope use cases
FrogBoss has several limitations and constraints that users should be aware of. While it excels at debugging and multi-file code reasoning, it is restricted to text-based inputs and outputs and cannot process or generate images, audio, or video. The model may struggle with highly domain-specific codebases outside its training distribution and can produce incorrect or incomplete fixes if prompts are ambiguous. It is not designed for general-purpose text generation or tasks unrelated to software engineering.
Prohibited uses include generating harmful or insecure code, engaging in activities that violate legal or ethical standards, producing disallowed content (e.g., sexual, violent, hateful), or using the model for tasks unrelated to software development or outside a research setting.
Distribution channels
Model weights are available on HuggingFace and Azure AI Foundry.
Input formats
Given the nature of the training data, FrogBoss is best suited for prompts using the chat format as follows:
```json
[
  {
    "role": "system",
    "content": "The system prompt, followed by the list of descriptions of available functions, and a templatic function call example."
  },
  {
    "role": "user",
    "content": "The first user prompt, which includes a paragraph describing the problem statement, and a list of general instructions on bug fixing tasks."
  },
  ...,
  {
    "role": "assistant",
    "content": "The reasoning content generated by the agent in the previous step.
<function=the_called_function_name>
<parameter=example_parameter_1>value_1</parameter>
</function>"
  },
  {
    "role": "user",
    "content": "The new observation returned from the environment in response to the agent's previous function call."
  }
]
```
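For illustration, an assistant turn in the function-call markup shown above could be assembled like this. The helper name, the tool name `str_replace_editor`, and the parameter handling are our own assumptions, not part of any released API:

```python
def format_function_call(reasoning: str, function: str, parameters: dict) -> str:
    """Render an assistant message in the <function=...> markup shown above.

    Hypothetical helper for illustration only.
    """
    lines = [reasoning, f"<function={function}>"]
    for name, value in parameters.items():
        lines.append(f"<parameter={name}>{value}</parameter>")
    lines.append("</function>")
    return "\n".join(lines)

message = {
    "role": "assistant",
    "content": format_function_call(
        "The bug is likely in the parser.",
        "str_replace_editor",           # example tool name, assumed
        {"path": "src/parser.py"},
    ),
}
```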
Using R2E-Gym Agent’s scaffolding
Clone R2E-Gym repository and install dependencies:
```shell
git clone https://github.com/R2E-Gym/R2E-Gym.git
cd R2E-Gym
pip install -e .
```
Serving the model
The recommended way to serve FrogBoss-32B-2510 is with vLLM.
```shell
vllm serve microsoft/FrogBoss-32B-2510 --tensor-parallel-size 4 \
    --enable-prefix-caching \
    --gpu-memory-utilization 0.9 \
    --max-model-len 65536 \
    --hf-overrides '{"max_position_embeddings": 65536}'
```
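Once running, the server exposes an OpenAI-compatible endpoint at `http://127.0.0.1:8000/v1`. A minimal chat-completions request payload could be built as below; the system and user contents are placeholders, and actually sending the request requires the server above to be running:

```python
import json

# Sketch of an OpenAI-compatible chat-completions request for the
# local vLLM server started above; message contents are placeholders.
payload = {
    "model": "microsoft/FrogBoss-32B-2510",
    "messages": [
        {"role": "system", "content": "You are a debugging agent."},
        {"role": "user", "content": "Fix the failing test in utils.py."},
    ],
    "max_tokens": 8192,
    "temperature": 1.0,
}

body = json.dumps(payload)
# POST `body` to http://127.0.0.1:8000/v1/chat/completions with the
# header "Content-Type: application/json" (e.g., via `requests` or `curl`).
```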
Example code snippet
```python
import os
from pathlib import Path

from datasets import load_dataset
from r2egym.agenthub.environment.env import EnvArgs, RepoEnv
from r2egym.agenthub.agent.agent import AgentArgs, Agent

ds = load_dataset("R2E-Gym/SWE-Bench-Verified")["test"]
env_index = 100  # index of the environment, in [0, len(ds) - 1]
env_args = EnvArgs(ds=ds[env_index])
env = RepoEnv(env_args)

agent_args = AgentArgs.from_yaml(Path("./src/r2egym/agenthub/config/r2egym/edit_non_fn_calling.yaml"))
agent_args.llm_name = "hosted_vllm/microsoft/FrogBoss-32B-2510"
os.environ["LLM_BASE_URL"] = "http://127.0.0.1:8000/v1"

agent = Agent(name="EditingAgent", args=agent_args)
output = agent.run(env, max_steps=40, use_fn_calling=False)
```
Responsible AI considerations
As noted above, the model may struggle with highly domain-specific codebases outside its training distribution and can produce incorrect or incomplete fixes if prompts are ambiguous. It is not designed for general-purpose text generation or tasks unrelated to software engineering. Users should keep these limitations in mind when choosing a use case.
Best Practices
- Always validate generated code for security and correctness.
- Use the model in environments with proper monitoring, guardrails, and sandboxing.
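As one concrete form of sandboxing, a generated patch can be exercised in a separate interpreter process with a hard timeout before it is trusted. This is a minimal sketch under our own assumptions; a real deployment should also isolate filesystem and network access (e.g., containers):

```python
import subprocess
import sys

def run_patched_tests(code: str, timeout_s: int = 30) -> bool:
    """Run candidate code in a fresh interpreter with a timeout.

    Minimal illustration only: a production sandbox should also restrict
    filesystem and network access (containers, seccomp, etc.).
    """
    try:
        result = subprocess.run(
            [sys.executable, "-c", code],
            capture_output=True,
            timeout=timeout_s,
        )
    except subprocess.TimeoutExpired:
        return False
    return result.returncode == 0

ok = run_patched_tests("assert 1 + 1 == 2")
```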
Data overview
Training, testing, and validation datasets
The training data consists of a collection of 9k debugging trajectories (i.e., sequence of tool calls, and code generation) produced by a strong teacher model (e.g., claude-sonnet-4). Those trajectories were obtained from a mix of real-world bugs (R2E-Gym), synthetic bugs (e.g., SWE-Smith), and high-quality FeatAdd bugs generated with the BugPilot framework. The composition of the dataset ensures the model learns realistic debugging patterns rather than trivial fixes.
Quality and performance evaluation
FrogBoss was evaluated on SWE-Bench Verified (500 problems) using the R2E-Gym agent scaffolding with 64k max context length, 100 max environment steps, and temperature of 1.0. We report Pass@1 accuracy averaged over 3 runs.
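For clarity, the reported score is a plain average of per-run Pass@1 values over the 500 problems. With hypothetical per-run solve counts (the actual counts were not published), the arithmetic looks like:

```python
# Hypothetical per-run solve counts, chosen for illustration only.
TOTAL_PROBLEMS = 500
solved_per_run = [274, 272, 273]  # problems solved in each of 3 runs (made up)

# Pass@1 = mean of per-run accuracies.
pass_at_1 = sum(n / TOTAL_PROBLEMS for n in solved_per_run) / len(solved_per_run)
print(f"Pass@1: {pass_at_1:.1%}")
```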
| Models | Scaffolds | Bugs | Trajectories | SWE‑Bench Verified (%) |
|---|---|---|---|---|
| FrogBoss‑32B‑2510 | R2E‑Gym | 3k | 9k | 54.6 |
| CWM‑32B | Agentless | - | - | 53.9 |
| SWE‑Mirror‑LM‑32B | OpenHands | 60k | 12k | 52.2 |
| FrogMini‑14B‑2510 | R2E‑Gym | 3k | 9k | 45.3 |
| DeepSWE‑32B‑Preview | R2E‑Gym | 4.6k | - | 42.2 |
| SWE‑Smith‑LM‑32B | SWE‑Agent | 50k | 5k | 40.2 |
| Skywork‑SWE‑32B | OpenHands | 10.1k | 8k | 38.0 |
| R2E‑Gym‑32B | R2E‑Gym | 4.6k | 4.5k | 34.4 |
| SWE‑Gym‑32B | OpenHands | 2.4k | 491 | 20.6 |
| Larger Open Weights Models | | | | |
| GLM‑4.5‑358B | SWE‑Agent | - | - | 64.2 |
| Qwen3‑Coder‑480B | mini‑SWE‑Agent | - | - | 55.4 |
| GLM‑4.5‑358B | mini‑SWE‑Agent | - | - | 54.2 |
| DeepSeek‑R1‑0528 | OpenHands | - | - | 45.6 |
| SWE‑RL‑70B | Agentless | - | - | 41.0 |
| SWE‑Fixer‑72B | SWE‑Fixer | 110k | - | 32.8 |
| Proprietary Models | | | | |
| Claude Sonnet 4 | Moatless Tools | - | - | 70.8 |
| Claude Sonnet 4 | SWE‑Agent | - | - | 66.6 |
| Claude Sonnet 4 | R2E‑Gym | - | - | 66.9 |
| GPT‑5 | R2E‑Gym | - | - | 65.7 |
| GPT‑4o | R2E‑Gym | - | - | 29.3 |
Long context
Our models don’t support long context.
Safety evaluation and red-teaming
The primary mode of failure for FrogBoss occurs when, given a buggy codebase and a task statement describing the problem (similar to a GitHub issue), the model generates a code patch that attempts to fix the bug, but the patch may be incorrect. Users should be aware that the output code patch might not successfully resolve the issue or could introduce new errors.
Acknowledgement
Our training used LLaMA-Factory, an open-source LLM fine-tuning library.
Our model is trained on top of Qwen/Qwen3-32B.
Our model has been optimized for the R2E-Gym's agent scaffolding.