d3rlpy is the first library to support offline deep reinforcement learning algorithms, in which the algorithm learns a good policy entirely from a given dataset, making it suitable for tasks where online interaction is not feasible.
d3rlpy provides state-of-the-art algorithms through scikit-learn-style APIs without compromising flexibility, offering detailed configuration options for professional users. Moreover, d3rlpy is not just designed like scikit-learn; it is also fully compatible with scikit-learn utilities.
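As a minimal sketch of that compatibility (assuming the td_error_scorer metric in d3rlpy.metrics.scorer and the PyBullet dataset used in the example below), episodes can be split and evaluated with standard scikit-learn utilities:

from sklearn.model_selection import train_test_split
import d3rlpy
from d3rlpy.metrics.scorer import td_error_scorer
# episodes are plain Python objects, so scikit-learn utilities apply directly
dataset, _ = d3rlpy.datasets.get_pybullet('hopper-bullet-mixed-v0')
train_episodes, test_episodes = train_test_split(dataset.episodes, test_size=0.2)
# train while reporting the TD error on held-out episodes every epoch
cql = d3rlpy.algos.CQL()
cql.fit(train_episodes,
        eval_episodes=test_episodes,
        n_epochs=10,
        scorers={'td_error': td_error_scorer})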
d3rlpy provides further tweaks such as ensemble algorithms and data augmentation to improve the performance of state-of-the-art algorithms, potentially beyond their original papers. Therefore, d3rlpy enables every user to achieve professional-level performance in just a few lines of code.
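For instance, one such tweak is an ensemble of Q-functions; the sketch below assumes the n_critics argument exposed by CQL, with other tweaks configured through similar constructor arguments:

import d3rlpy
# use a larger ensemble of Q-functions to stabilize value estimation
cql = d3rlpy.algos.CQL(n_critics=5, use_gpu=True)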
$ pip install d3rlpy
$ pip install git+https://github.com/takuseno/d4rl-pybullet
import d3rlpy
# prepare dataset
dataset, _ = d3rlpy.datasets.get_pybullet('hopper-bullet-mixed-v0')
# prepare algorithm
cql = d3rlpy.algos.CQL(use_gpu=True)
# start training
cql.fit(dataset.episodes, n_epochs=100)
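Once trained, the policy can be queried and exported directly; a minimal sketch assuming the predict and save_policy methods of d3rlpy algorithms:

# greedy actions for a batch of observations taken from the dataset
actions = cql.predict(dataset.observations[:10])
# export the greedy policy as a TorchScript model for deployment
cql.save_policy('policy.pt')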
$ pip install git+https://github.com/takuseno/d4rl-atari
import d3rlpy
# prepare dataset
dataset, _ = d3rlpy.datasets.get_atari('breakout-mixed-v0')
# prepare algorithm
cql = d3rlpy.algos.DiscreteCQL(n_frames=4, scaler='pixel', use_gpu=True)
# start training
cql.fit(dataset.episodes, n_epochs=100)
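The environment returned alongside the dataset can also be used to evaluate the trained policy online; a minimal sketch assuming the evaluate_on_environment scorer in d3rlpy.metrics.scorer:

from d3rlpy.metrics.scorer import evaluate_on_environment
# keep the environment instead of discarding it
dataset, env = d3rlpy.datasets.get_atari('breakout-mixed-v0')
# run 10 evaluation episodes with the trained greedy policy
mean_return = evaluate_on_environment(env, n_trials=10)(cql)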
d4rl-pybullet is a dataset library providing continuous-control datasets collected with PyBullet environments.
d4rl-atari is a dataset library providing the Atari datasets released by Google, with the convenience of automatic dataset management and an easy-to-use API.
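A minimal sketch of that API, assuming d4rl-atari follows the d4rl convention of exposing datasets through registered Gym environments and a get_dataset() method:

import gym
import d4rl_atari  # registers the dataset environments
env = gym.make('breakout-mixed-v0')
# dictionary containing observations, actions, rewards and terminal flags
dataset = env.get_dataset()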
MINERVA is a GUI tool for offline deep reinforcement learning without any coding.