---
library_name: transformers
license: apache-2.0
base_model: openai/gpt-oss-20b
tags:
- generated_from_trainer
datasets:
- HuggingFaceH4/Multilingual-Thinking
model-index:
- name: outputs/gpt-oss-out-fft/
results: []
---
[Built with Axolotl](https://github.com/axolotl-ai-cloud/axolotl)

See axolotl config (axolotl version: `0.12.0.dev0`):
```yaml
base_model: openai/gpt-oss-20b
use_kernels: true
model_quantization_config: Mxfp4Config
model_quantization_config_kwargs:
  dequantize: true
  block_size: 32 # default, matches the OCP spec
  strict: false # fall back to fp16 on any odd layers
plugins:
- axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
experimental_skip_move_to_device: true # prevent OOM by NOT putting model to GPU before sharding
datasets:
  # - path: winglian/pirate-ultrachat-10k
  #   type: chat_template
  #   split: train
  - message_property_mappings:
      content: content
      role: role
    path: HuggingFaceH4/Multilingual-Thinking
    trust_remote_code: false
    field_messages: messages
    type: chat_template
dataset_prepared_path: last_run_prepared
val_set_size: 0
output_dir: ./outputs/gpt-oss-out-fft/
sequence_len: 8192
sample_packing: true
# adapter: lora
# lora_r: 8
# lora_alpha: 16
# lora_dropout: 0.0
# lora_target_linear: true
wandb_project: gpt-oss-20b
wandb_name: multilingual-reasoning
gradient_accumulation_steps: 1
micro_batch_size: 2
num_epochs: 1
optimizer: adamw_torch_8bit
lr_scheduler: constant_with_warmup
learning_rate: 2e-5
bf16: true
tf32: true
flash_attention: true
attn_implementation: kernels-community/vllm-flash-attn3
gradient_checkpointing: true
activation_offloading: true
logging_steps: 1
saves_per_epoch: 1
warmup_ratio: 0.1
special_tokens:
eot_tokens:
  - "<|end|>"
fsdp_version: 2
fsdp_config:
  offload_params: false
  state_dict_type: SHARDED_STATE_DICT
  auto_wrap_policy: TRANSFORMER_BASED_WRAP
  transformer_layer_cls_to_wrap: GptOssDecoderLayer
  reshard_after_forward: true
```
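For reference, the quantization lines in the config above amount to dequantizing the MXFP4 base checkpoint to bf16 before full fine-tuning. Below is a minimal sketch of that load step in plain transformers; it only uses the documented `dequantize` argument of `Mxfp4Config` (the `block_size`/`strict` kwargs shown in the config are handled by axolotl and may not apply to every transformers version), and it is not the training entry point itself.

```python
# Minimal sketch, not the training entry point: load the base model the way the
# config above describes -- MXFP4 checkpoint dequantized to bf16 before full fine-tuning.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, Mxfp4Config

quant_config = Mxfp4Config(dequantize=True)  # mirrors model_quantization_config(_kwargs) above

tokenizer = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")
model = AutoModelForCausalLM.from_pretrained(
    "openai/gpt-oss-20b",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
```

The run itself was produced with axolotl; given a config file like the one above, training is typically launched with `axolotl train config.yaml`.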
# outputs/gpt-oss-out-fft/
This model is a fine-tuned version of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b) on the HuggingFaceH4/Multilingual-Thinking dataset.
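The snippet below is a hedged usage sketch, not an official example: it assumes the fine-tuned weights are published in this repository (so `model_id` is a placeholder to substitute) and relies on the standard transformers chat-template API.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "<this-repo-id>"  # placeholder -- replace with the actual Hub id of this model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# The dataset targets multilingual reasoning, so a non-English prompt is a natural smoke test.
messages = [{"role": "user", "content": "Explique brièvement la photosynthèse."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```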
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: ADAMW_TORCH_8BIT with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant_with_warmup
- training_steps: 8
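
The effective batch size reported above follows the usual Trainer accounting; a small check under that assumption:

```python
# total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
micro_batch_size = 2
gradient_accumulation_steps = 1
num_devices = 8
assert micro_batch_size * gradient_accumulation_steps * num_devices == 16  # value reported above
```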
### Training results
### Framework versions
- Transformers 4.55.0
- Pytorch 2.8.0+cu128
- Datasets 4.0.0
- Tokenizers 0.21.4