Tags: Reinforcement Learning · Safetensors · English · qwen2 · reward-modeling

Model Card for LifelongAlignment/aifgen-piecewise-preference-shift-0-reward-model

This is a reward model fine-tuned from Qwen/Qwen2.5-0.5B-Instruct on the AIF-Gen piecewise preference shift dataset using TRL.
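A minimal scoring sketch follows, assuming the checkpoint exposes the single-logit sequence-classification head that TRL's RewardTrainer produces; the example prompt and response are purely illustrative.

```python
# Minimal sketch: scoring a prompt/response pair with this reward model.
# Assumes a single-logit sequence-classification head, as is typical for
# reward models trained with TRL's RewardTrainer.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "LifelongAlignment/aifgen-piecewise-preference-shift-0-reward-model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(
    model_id, num_labels=1, torch_dtype=torch.bfloat16
)
model.eval()

# Format the conversation with the tokenizer's chat template, then score it.
messages = [
    {"role": "user", "content": "Explain reinforcement learning in one sentence."},
    {"role": "assistant", "content": "It is learning to act by maximizing reward from feedback."},
]
text = tokenizer.apply_chat_template(messages, tokenize=False)
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    reward = model(**inputs).logits[0, 0].item()  # higher means more preferred
print(f"reward: {reward:.4f}")
```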

Model Details

Model Description

Training was performed on 8 A100 GPUs for one epoch using full fine-tuning.
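The sketch below illustrates how such a run could be set up with TRL's RewardTrainer. The dataset identifier, batch size, and other hyperparameters shown are assumptions for illustration; only the base model, the single epoch, and full fine-tuning are stated above.

```python
# Illustrative sketch of reward-model training with TRL's RewardTrainer.
# Dataset id and hyperparameters are placeholders, not the actual run settings.
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from trl import RewardConfig, RewardTrainer

base = "Qwen/Qwen2.5-0.5B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=1)
model.config.pad_token_id = tokenizer.pad_token_id  # needed for batched scoring

# A preference dataset with "chosen"/"rejected" columns, as RewardTrainer expects.
# The dataset id below is hypothetical; see the dataset reference in this card.
dataset = load_dataset("LifelongAlignment/aifgen-piecewise-preference-shift", split="train")

config = RewardConfig(
    output_dir="qwen2.5-0.5b-reward",
    num_train_epochs=1,              # one epoch, as stated above
    per_device_train_batch_size=8,   # assumed value
    bf16=True,
)

trainer = RewardTrainer(
    model=model,
    args=config,
    processing_class=tokenizer,
    train_dataset=dataset,
)
trainer.train()
```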

Uses

This model is intended for benchmarking RLHF methods in both static and lifelong learning scenarios. TODO: link the paper.

Direct Use

Refer to Uses.

Out-of-Scope Use

As noted for the AIF-Gen datasets, the synthetic training data may contain hallucinations; keep this in mind if you use this reward model to train agents intended for deployment.

Bias, Risks, and Limitations

The known risks are those described under Out-of-Scope Use.

Training Procedure

Preprocessing [optional]

[More Information Needed]

Training Hyperparameters

  • Training regime: [More Information Needed]

Speeds, Sizes, Times [optional]

[More Information Needed]

Evaluation

Testing Data, Factors & Metrics

Testing Data

[More Information Needed]

Factors

[More Information Needed]

Metrics

[More Information Needed]

Results

[More Information Needed]

Summary

Model Examination [optional]

[More Information Needed]

Environmental Impact

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

  • Hardware Type: [More Information Needed]
  • Hours used: [More Information Needed]
  • Cloud Provider: [More Information Needed]
  • Compute Region: [More Information Needed]
  • Carbon Emitted: [More Information Needed]
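As an illustration of the estimate such a calculator produces, the following back-of-the-envelope computation multiplies assumed GPU power draw, GPU count, runtime, and grid carbon intensity. All numbers except the GPU count are placeholders, since hardware hours and region are not reported above.

```python
# Illustrative estimate in the spirit of the ML CO2 Impact calculator
# (Lacoste et al., 2019). All values except the GPU count are placeholders.
gpu_power_kw = 0.4      # assumed average draw per A100, in kW
num_gpus = 8            # from the training description above
hours = 4.0             # placeholder; actual training time not reported
carbon_intensity = 0.4  # placeholder grid intensity, kg CO2eq per kWh

energy_kwh = gpu_power_kw * num_gpus * hours
emissions_kg = energy_kwh * carbon_intensity
print(f"~{energy_kwh:.1f} kWh, ~{emissions_kg:.1f} kg CO2eq")
```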

Technical Specifications [optional]

Model Architecture and Objective

[More Information Needed]

Compute Infrastructure

[More Information Needed]

Hardware

[More Information Needed]

Software

[More Information Needed]

Citation [optional]

BibTeX:

[More Information Needed]

APA:

[More Information Needed]

Glossary [optional]

[More Information Needed]

More Information [optional]

[More Information Needed]

Model Card Authors [optional]

[More Information Needed]

Model Card Contact

[More Information Needed]

Model size: 494M parameters (Safetensors, tensor type BF16)
Base model: Qwen/Qwen2.5-0.5B
Dataset: the AIF-Gen piecewise preference shift dataset described above