|
--- |
|
license: bsd-3-clause |
|
tags: |
|
- Hopper-v2 |
|
- reinforcement-learning |
|
- Soft Actor Critic |
|
- SRL |
|
- deep-reinforcement-learning |
|
model-index: |
|
- name: SAC |
|
results: |
|
- metrics: |
|
- type: FAS (J=1) |
|
value: 0.050304 ± 0.020365 |
|
name: FAS |
|
- type: FAS (J=2) |
|
value: 0.092501 ± 0.010512 |
|
name: FAS |
|
- type: FAS (J=4) |
|
value: 0.135757 ± 0.030884 |
|
name: FAS |
|
- type: FAS (J=8) |
|
value: 0.141675 ± 0.038575 |
|
name: FAS |
|
- type: FAS (J=16) |
|
value: 0.263203 ± 0.079994 |
|
name: FAS |
|
task: |
|
type: OpenAI Gym |
|
name: OpenAI Gym |
|
dataset: |
|
name: Hopper-v2 |
|
type: Hopper-v2 |
|
Paper: https://arxiv.org/pdf/2410.08979 |
|
Code: https://github.com/dee0512/Sequence-Reinforcement-Learning |
|
--- |
|
# Soft-Actor-Critic: Hopper-v2 |
|
|
|
These are 25 trained models, covering **seeds 0-4** and **J = 1, 2, 4, 8, 16**, of a **Soft Actor-Critic (SAC)** agent playing **Hopper-v2**, trained for **[Sequence Reinforcement Learning (SRL)](https://github.com/dee0512/Sequence-Reinforcement-Learning)**.
|
|
|
## Model Sources |
|
|
|
**Repository:** [https://github.com/dee0512/Sequence-Reinforcement-Learning](https://github.com/dee0512/Sequence-Reinforcement-Learning) |
|
**Paper (ICLR):** [https://openreview.net/forum?id=w3iM4WLuvy](https://openreview.net/forum?id=w3iM4WLuvy) |
|
**Arxiv:** [arxiv.org/pdf/2410.08979](https://arxiv.org/pdf/2410.08979) |
|
|
|
# Training Details: |
|
Using the repository: |
|
|
|
``` |
|
python .\train_sac.py --env_name <env_name> --seed <seed> --j <j> |
|
``` |
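
To reproduce all 25 models, the command above can be swept over every seed and J value. This is a sketch, assuming `train_sac.py` accepts exactly the flags shown; the `echo` prefix prints each command instead of launching it — remove it to actually train.

```shell
# Sweep over all 25 runs: seeds 0-4 x J in {1, 2, 4, 8, 16}.
# `echo` is included so the loop only prints the commands; drop it to run them.
for seed in 0 1 2 3 4; do
  for j in 1 2 4 8 16; do
    echo python train_sac.py --env_name Hopper-v2 --seed "$seed" --j "$j"
  done
done
```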
|
|
|
# Evaluation: |
|
|
|
Download the models folder and place it in the same directory as the cloned repository. |
|
Using the repository: |
|
|
|
``` |
|
python .\eval_sac.py --env_name <env_name> --seed <seed> --j <j> |
|
``` |
|
|
|
## Metrics: |
|
|
|
**FAS:** Frequency Averaged Score |
|
**J:** Action repetition parameter (the `--j` flag in the commands above)
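
As a rough illustration of how a frequency-averaged metric can be computed, here is a minimal sketch, assuming FAS is the mean of min-max-normalized returns across the evaluated J values. The exact definition is given in the paper; the function name, normalization bounds, and returns below are made-up placeholders, not real evaluation results.

```python
def frequency_averaged_score(returns_by_j, min_return, max_return):
    """Mean of min-max-normalized returns over all evaluated J values.

    Hypothetical sketch of a frequency-averaged metric; see the SRL paper
    for the actual FAS definition.
    """
    normalized = [
        (r - min_return) / (max_return - min_return)
        for r in returns_by_j.values()
    ]
    return sum(normalized) / len(normalized)


# Placeholder returns for J = 1, 2, 4, 8, 16 (illustrative values only).
returns_by_j = {1: 900.0, 2: 1200.0, 4: 1500.0, 8: 1100.0, 16: 800.0}
print(frequency_averaged_score(returns_by_j, min_return=0.0, max_return=3000.0))
```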
|
|
|
|
|
# Citation |
|
|
|
The paper can be cited with the following BibTeX entry:
|
|
|
## BibTeX: |
|
|
|
``` |
|
@inproceedings{DBLP:conf/iclr/PatelS25, |
|
author = {Devdhar Patel and |
|
Hava T. Siegelmann}, |
|
title = {Overcoming Slow Decision Frequencies in Continuous Control: Model-Based |
|
Sequence Reinforcement Learning for Model-Free Control}, |
|
booktitle = {The Thirteenth International Conference on Learning Representations, |
|
{ICLR} 2025, Singapore, April 24-28, 2025}, |
|
publisher = {OpenReview.net}, |
|
year = {2025}, |
|
url = {https://openreview.net/forum?id=w3iM4WLuvy} |
|
} |
|
``` |
|
|
|
## APA: |
|
``` |
|
Patel, D., & Siegelmann, H. T. (2025). Overcoming slow decision frequencies in continuous control: Model-based sequence reinforcement learning for model-free control. In The Thirteenth International Conference on Learning Representations.
|
``` |