---
license: bsd-3-clause
tags:
- Walker2d-v2
- reinforcement-learning
- Soft Actor Critic
- SRL
- deep-reinforcement-learning
model-index:
- name: SAC
results:
- metrics:
- type: FAS (J=1)
value: 0.070768 ± 0.011055
name: FAS
- type: FAS (J=2)
value: 0.083818 ± 0.025049
name: FAS
- type: FAS (J=4)
value: 0.137035 ± 0.042001
name: FAS
- type: FAS (J=8)
value: 0.232737 ± 0.065282
name: FAS
- type: FAS (J=16)
value: 0.150935 ± 0.043573
name: FAS
task:
type: OpenAI Gym
name: OpenAI Gym
dataset:
name: Walker2d-v2
type: Walker2d-v2
Paper: https://arxiv.org/pdf/2410.08979
Code: https://github.com/dee0512/Sequence-Reinforcement-Learning
---
# Soft-Actor-Critic: Walker2d-v2
These are 25 trained models, spanning **seeds 0-4** and **J = 1, 2, 4, 8, 16**, of a **Soft Actor-Critic (SAC)** agent playing **Walker2d-v2**, trained for **[Sequence Reinforcement Learning (SRL)](https://github.com/dee0512/Sequence-Reinforcement-Learning)**.
## Model Sources
- **Repository:** [https://github.com/dee0512/Sequence-Reinforcement-Learning](https://github.com/dee0512/Sequence-Reinforcement-Learning)
- **Paper (ICLR):** [https://openreview.net/forum?id=w3iM4WLuvy](https://openreview.net/forum?id=w3iM4WLuvy)
- **arXiv:** [arxiv.org/pdf/2410.08979](https://arxiv.org/pdf/2410.08979)
# Training Details:
Using the repository, each model was trained with:
```
python train_sac.py --env_name <env_name> --seed <seed> --j <j>
```
For the models in this card: `--env_name Walker2d-v2`, seeds 0-4, and `--j` in {1, 2, 4, 8, 16}.
# Evaluation:
Download the models folder and place it in the same directory as the cloned repository.
Then, using the repository:
```
python eval_sac.py --env_name <env_name> --seed <seed> --j <j>
```
## Metrics:
**FAS:** Frequency Averaged Score (see the paper for the full definition)
**J:** Action-repetition parameter
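For intuition only, here is a minimal sketch of how a frequency-averaged score *could* be aggregated from per-J results, assuming it is the mean of min-max-normalized returns across the evaluated J values. The function name, the normalization bounds, and the scores below are all hypothetical illustrations, not the paper's exact definition or the results reported above.

```python
def frequency_averaged_score(returns_by_j, min_return, max_return):
    """Mean of min-max-normalized returns across action-repetition values J.

    returns_by_j: mapping J -> mean episode return evaluated at that J.
    """
    normalized = [(r - min_return) / (max_return - min_return)
                  for r in returns_by_j.values()]
    return sum(normalized) / len(normalized)

# Made-up returns for illustration only.
scores = {1: 900.0, 2: 850.0, 4: 700.0, 8: 500.0, 16: 300.0}
fas = frequency_averaged_score(scores, min_return=0.0, max_return=5000.0)
print(fas)  # 0.13
```

Averaging over J rewards agents that stay robust as the decision frequency drops, rather than only performing well at J=1.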
# Citation
The paper can be cited with the following BibTeX entry:
## BibTeX:
```
@inproceedings{DBLP:conf/iclr/PatelS25,
author = {Devdhar Patel and
Hava T. Siegelmann},
title = {Overcoming Slow Decision Frequencies in Continuous Control: Model-Based
Sequence Reinforcement Learning for Model-Free Control},
booktitle = {The Thirteenth International Conference on Learning Representations,
{ICLR} 2025, Singapore, April 24-28, 2025},
publisher = {OpenReview.net},
year = {2025},
url = {https://openreview.net/forum?id=w3iM4WLuvy}
}
```
## APA:
```
Patel, D., & Siegelmann, H. T. Overcoming Slow Decision Frequencies in Continuous Control: Model-Based Sequence Reinforcement Learning for Model-Free Control. In The Thirteenth International Conference on Learning Representations.
```