jldbc / bandits
Multi-Armed Bandit algorithms applied to the MovieLens 20M dataset
☆56 · Updated 4 years ago
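The listing itself does not describe how the repository's algorithms work, but as a rough illustration of the kind of approach such a project applies to MovieLens-style data, here is a minimal epsilon-greedy bandit sketch on simulated click feedback. The arm count, click-through rates, and epsilon value are illustrative assumptions and are not taken from jldbc/bandits.

```python
import numpy as np

# Minimal epsilon-greedy bandit on simulated binary feedback.
# Arm count, reward probabilities, and epsilon are illustrative
# assumptions, not values from jldbc/bandits.
rng = np.random.default_rng(0)
n_arms = 10                                 # e.g. 10 candidate movies
true_ctr = rng.uniform(0.02, 0.2, n_arms)   # hidden click-through rate per arm
counts = np.zeros(n_arms)                   # pulls per arm
values = np.zeros(n_arms)                   # running mean reward per arm
epsilon = 0.1

for t in range(10_000):
    if rng.random() < epsilon:
        arm = int(rng.integers(n_arms))     # explore: random arm
    else:
        arm = int(np.argmax(values))        # exploit: best estimated arm
    reward = float(rng.random() < true_ctr[arm])  # simulated click
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean

print("best true arm:", int(np.argmax(true_ctr)), "| most pulled:", int(np.argmax(counts)))
```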
Alternatives and similar repositories for bandits
Users interested in bandits are comparing it to the libraries listed below.
- Stream-data-based news recommendation using a contextual bandit approach ☆48 · Updated 7 years ago
- https://sites.google.com/cornell.edu/recsys2021tutorial ☆55 · Updated 3 years ago
- Bandit algorithm simulations for online learning ☆86 · Updated 5 years ago
- Predict and recommend the news articles a user is most likely to click on, in real time. ☆32 · Updated 7 years ago
- ☆36 · Updated 5 years ago
- Multi-armed bandit implementation using the Yahoo! Front Page Today Module User Click Log Dataset ☆101 · Updated 3 years ago
- Source code and data from the RecSys 2020 article "Carousel Personalization in Music Streaming Apps with Contextual Bandits" by W. Bendad… ☆56 · Updated 4 years ago
- A lightweight contextual bandit and reinforcement learning library designed for production Python services. ☆67 · Updated 4 years ago
- ☆51 · Updated last year
- Implementing LinUCB and HybridLinUCB in Python (a minimal LinUCB sketch follows this list). ☆50 · Updated 7 years ago
- Multi-armed bandits for dynamic movie recommendations ☆14 · Updated 5 years ago
- ☆105 · Updated 3 years ago
- Contextual bandit in Python ☆114 · Updated 3 years ago
- Big Data's open seminars: An Interactive Introduction to Reinforcement Learning ☆64 · Updated 4 years ago
- ☆50 · Updated 4 years ago
- Offline evaluation of multi-armed bandit algorithms ☆23 · Updated 4 years ago
- Accompanying code for reproducing experiments from the HybridSVD paper. Preprint is available at https://arxiv.org/abs/1802.06398. ☆25 · Updated 5 years ago
- In this notebook several classes of multi-armed bandits are implemented, including epsilon-greedy, UCB, Linear UCB (Contextual bandit… ☆87 · Updated 4 years ago
- Building recommender systems using contextual bandit methods to address the cold-start issue and enable online real-time learning ☆11 · Updated 3 years ago
- A toolkit for reinforcement-learning-based recommendation (RL4Rec) ☆23 · Updated 3 years ago
- Multi-Armed Bandit Algorithms Library (MAB) ☆133 · Updated 2 years ago
- Working example of a contextual multi-armed bandit ☆55 · Updated 5 years ago
- Uplifted Contextual Multi-Armed Bandit ☆19 · Updated 3 years ago
- (RecSys 2020) "Doubly Robust Estimator for Ranking Metrics with Post-Click Conversions" ☆24 · Updated 2 years ago
- Source code for our paper "Pessimistic Decision-Making for Recommender Systems", published at ACM TORS and RecSys 2021. ☆11 · Updated 2 years ago
- No Regrets: A deep-dive comparison of bandits and A/B testing ☆47 · Updated 7 years ago
- ☆87 · Updated last year
- ☆49 · Updated 2 years ago
- Thompson Sampling Tutorial ☆53 · Updated 6 years ago
- RecSim NG: Toward Principled Uncertainty Modeling for Recommender Ecosystems ☆119 · Updated 3 years ago
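Several of the repositories above implement LinUCB (Li et al., 2010). As a self-contained sketch of the disjoint LinUCB update those implementations are based on, here is one possible version; the context dimension, arm count, alpha, and the toy reward model are made-up assumptions, and none of this code is taken from the listed repositories.

```python
import numpy as np

class DisjointLinUCB:
    """Minimal disjoint LinUCB sketch; dimensions and alpha are illustrative."""
    def __init__(self, n_arms: int, dim: int, alpha: float = 1.0):
        self.alpha = alpha
        self.A = [np.eye(dim) for _ in range(n_arms)]    # per-arm design matrices
        self.b = [np.zeros(dim) for _ in range(n_arms)]  # per-arm reward vectors

    def select(self, x: np.ndarray) -> int:
        # Score each arm: ridge estimate plus an upper-confidence bonus.
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b
            scores.append(theta @ x + self.alpha * np.sqrt(x @ A_inv @ x))
        return int(np.argmax(scores))

    def update(self, arm: int, x: np.ndarray, reward: float) -> None:
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x

# Toy usage on random contexts; the linear reward model is a made-up example.
rng = np.random.default_rng(0)
dim, n_arms = 5, 4
true_theta = rng.normal(size=(n_arms, dim))
policy = DisjointLinUCB(n_arms, dim, alpha=0.5)
for t in range(2000):
    x = rng.normal(size=dim)
    arm = policy.select(x)
    reward = float(true_theta[arm] @ x + rng.normal(scale=0.1))
    policy.update(arm, x, reward)
```

Each arm keeps its own ridge-regression statistics (A, b); the alpha term adds an exploration bonus that shrinks as an arm accumulates observations.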