iurteaga / bandits
Public repository for the work on bandit problems
☆23 · Updated last year
Alternatives and similar repositories for bandits:
Users interested in bandits are comparing it to the libraries listed below.
- Non-stationary bandit for experiments with Reinforcement Learning ☆34 · Updated 8 years ago
- Library for Multi-Armed Bandit Algorithms ☆57 · Updated 8 years ago
- Non-stationary Off-policy Evaluation ☆13 · Updated 6 years ago
- Code for doubly stochastic gradients ☆25 · Updated 10 years ago
- Experimentation for oracle-based contextual bandit algorithms ☆31 · Updated 2 years ago
- Implementation of the X-armed Bandits algorithm, as detailed in the paper "X-armed Bandits" (Bubeck et al., 2011) ☆9 · Updated 6 years ago
- Code for "Best arm identification in multi-armed bandits with delayed feedback", AISTATS 2018 ☆19 · Updated 7 years ago
- An extension to Sacred for automated hyperparameter optimization ☆59 · Updated 7 years ago
- ☆16 · Updated 6 years ago
- Contextual Bandit Algorithms (+ Bandit Algorithms) ☆22 · Updated 5 years ago
- Contextual bandit in Python ☆111 · Updated 3 years ago
- Python implementation of projection losses ☆25 · Updated 5 years ago
- Repository of models in Pyro ☆29 · Updated 9 months ago
- Semi-synthetic experiments to test several approaches for off-policy evaluation and optimization of slate recommenders ☆43 · Updated 7 years ago
- Collaborative filtering with the GP-LVM ☆25 · Updated 9 years ago
- DQV-Learning: a novel faster synchronous Deep Reinforcement Learning algorithm
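For context on the problem setting these libraries address, here is a minimal sketch of a multi-armed bandit agent. It uses the standard epsilon-greedy strategy with an incremental mean-reward estimate; it is an illustrative example only and does not reproduce the API of any repository listed above.

```python
import random


class EpsilonGreedyBandit:
    """Minimal epsilon-greedy agent for a stationary K-armed bandit.

    Illustrative sketch only; not taken from any repository listed above.
    """

    def __init__(self, n_arms, epsilon=0.1, seed=None):
        self.n_arms = n_arms
        self.epsilon = epsilon
        self.counts = [0] * n_arms    # number of pulls per arm
        self.values = [0.0] * n_arms  # running mean reward per arm
        self.rng = random.Random(seed)

    def select_arm(self):
        # Explore a random arm with probability epsilon,
        # otherwise exploit the arm with the highest estimated mean.
        if self.rng.random() < self.epsilon:
            return self.rng.randrange(self.n_arms)
        return max(range(self.n_arms), key=lambda a: self.values[a])

    def update(self, arm, reward):
        # Incremental mean update: v <- v + (r - v) / n
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]


if __name__ == "__main__":
    # Usage: three Bernoulli arms with hidden success probabilities.
    probs = [0.2, 0.5, 0.8]
    agent = EpsilonGreedyBandit(n_arms=3, epsilon=0.1, seed=0)
    env_rng = random.Random(1)
    for _ in range(5000):
        arm = agent.select_arm()
        reward = 1.0 if env_rng.random() < probs[arm] else 0.0
        agent.update(arm, reward)
    print(agent.counts)  # the best arm (index 2) should dominate the pulls
```

Contextual and non-stationary variants (the focus of several repositories above) extend this loop by conditioning arm selection on side information or by discounting old reward estimates.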