david-cortes / contextualbandits
Python implementations of contextual bandits algorithms
☆820 · Jan 14, 2026 · Updated last month
Alternatives and similar repositories for contextualbandits
Users interested in contextualbandits are comparing it to the libraries listed below.
- Library of contextual bandits algorithms ☆339 · Mar 14, 2024 · Updated last year
- Contextual bandit in Python ☆112 · Jul 7, 2021 · Updated 4 years ago
- [IJAIT 2021] MABWiser: Contextual Multi-Armed Bandits Library ☆280 · Sep 5, 2024 · Updated last year
- Python library for Multi-Armed Bandits ☆766 · Feb 11, 2020 · Updated 6 years ago
- Open Bandit Pipeline: a Python library for bandit algorithms and off-policy evaluation ☆691 · Jun 3, 2024 · Updated last year
- 🔬 Research Framework for Single and Multi-Players 🎰 Multi-Arms Bandits (MAB) Algorithms, implementing all the state-of-the-art algorith… ☆418 · Apr 30, 2024 · Updated last year
- A lightweight contextual bandit & reinforcement learning library designed to be used in production Python services. ☆71 · Jun 4, 2021 · Updated 4 years ago
- Multi-Armed Bandit Algorithms Library (MAB) ☆135 · Sep 6, 2022 · Updated 3 years ago
- Predict and recommend the news articles a user is most likely to click on in real time. ☆32 · Apr 3, 2018 · Updated 7 years ago
- Stream-data-based news recommendation - a contextual bandit approach ☆47 · Nov 15, 2017 · Updated 8 years ago
- Bandit algorithm simulations for online learning ☆88 · May 13, 2020 · Updated 5 years ago
- Python application to set up and run streaming (contextual) bandit experiments. ☆83 · Sep 4, 2025 · Updated 5 months ago
- Code for my book on Multi-Armed Bandit Algorithms ☆920 · Jan 9, 2020 · Updated 6 years ago
- Working example of a contextual multi-armed bandit ☆55 · Sep 3, 2019 · Updated 6 years ago
- Big Data's open seminars: An Interactive Introduction to Reinforcement Learning ☆63 · Jun 7, 2021 · Updated 4 years ago
- ☆106 · Sep 13, 2021 · Updated 4 years ago
- Implementations and examples of common offline policy evaluation methods in Python. ☆224 · Feb 11, 2023 · Updated 3 years ago
- Code for reco-gym: A Reinforcement Learning Environment for the problem of Product Recommendation in Online Advertising ☆480 · Jul 9, 2021 · Updated 4 years ago
- Contextual Bandits in R - simulation and evaluation of Multi-Armed Bandit Policies ☆80 · Jul 25, 2020 · Updated 5 years ago
- Scripts for evaluation of contextual bandit algorithms ☆45 · Apr 27, 2020 · Updated 5 years ago
- Study of NeuralUCB and regret analysis for contextual bandits with neural decision-making ☆99 · Dec 14, 2021 · Updated 4 years ago
- Yahoo! news article recommendation system using LinUCB ☆111 · Feb 1, 2018 · Updated 8 years ago
- Source code for our paper "Top-K Contextual Bandits with Equity of Exposure" published at RecSys 2021. ☆15 · Aug 2, 2021 · Updated 4 years ago
- https://sites.google.com/cornell.edu/recsys2021tutorial ☆58 · Mar 21, 2022 · Updated 3 years ago
- Contextual bandit benchmarking ☆53 · Jan 21, 2026 · Updated 3 weeks ago
- Source code and data from the RecSys 2020 article "Carousel Personalization in Music Streaming Apps with Contextual Bandits" by W. Bendad… ☆59 · Sep 30, 2020 · Updated 5 years ago
- Online Ranking with Multi-Armed Bandits ☆19 · Sep 4, 2021 · Updated 4 years ago
- ☆369 · Aug 12, 2020 · Updated 5 years ago
- Source code for our paper "Joint Policy-Value Learning for Recommendation" published at KDD 2020. ☆23 · Jul 6, 2023 · Updated 2 years ago
- Estimators to perform off-policy evaluation ☆13 · Sep 3, 2024 · Updated last year
- CausalLift: Python package for causality-based Uplift Modeling in real-world business ☆352 · May 13, 2023 · Updated 2 years ago
- Implementing LinUCB and HybridLinUCB in Python (see the sketch after this list). ☆49 · May 15, 2018 · Updated 7 years ago
- Experimentation for oracle-based contextual bandit algorithms. ☆33 · Sep 12, 2022 · Updated 3 years ago
- A platform for Reasoning systems (Reinforcement Learning, Contextual Bandits, etc.) ☆3,682 · Updated this week
- ☆15 · Dec 14, 2020 · Updated 5 years ago
- ☆20 · Mar 15, 2017 · Updated 8 years ago
- (RecSys 2020) "Doubly Robust Estimator for Ranking Metrics with Post-Click Conversions" ☆24 · Mar 25, 2023 · Updated 2 years ago
- Python implementation of 'Scalable Recommendation with Hierarchical Poisson Factorization'. ☆79 · May 9, 2025 · Updated 9 months ago
- No Regrets: A deep dive comparison of bandits and A/B testing ☆47 · Feb 17, 2018 · Updated 7 years ago
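
Several of the repositories above implement LinUCB (Li et al., 2010), the linear upper-confidence-bound policy for contextual bandits. As a point of reference, here is a minimal sketch of the disjoint variant using only NumPy. It is not taken from any library listed here, and the names (`LinUCB`, `choose`, `update`) are illustrative: each arm keeps ridge-regression statistics A = I + Σ x xᵀ and b = Σ r·x, predicts a reward θᵀx with θ = A⁻¹b, and adds an exploration bonus α·√(xᵀA⁻¹x).

```python
import numpy as np

class LinUCB:
    """Minimal disjoint-LinUCB sketch: one ridge-regression model per arm,
    plus an upper-confidence bonus on the predicted reward. Illustrative
    only; this is not the API of any library listed above."""

    def __init__(self, n_arms, n_features, alpha=1.0):
        self.alpha = alpha  # width of the confidence bonus
        # Per-arm sufficient statistics: A = I + sum(x x^T), b = sum(r * x)
        self.A = np.stack([np.eye(n_features) for _ in range(n_arms)])
        self.b = np.zeros((n_arms, n_features))

    def choose(self, x):
        """Return the arm with the highest optimistic reward estimate for context x."""
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b  # ridge estimate of this arm's weights
            scores.append(theta @ x + self.alpha * np.sqrt(x @ A_inv @ x))
        return int(np.argmax(scores))

    def update(self, arm, x, reward):
        """Fold the observed (context, reward) pair into the chosen arm's statistics."""
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x

# Toy usage on a synthetic linear-reward problem.
rng = np.random.default_rng(0)
true_theta = rng.normal(size=(3, 5))  # hidden per-arm weight vectors
policy = LinUCB(n_arms=3, n_features=5, alpha=1.0)
for _ in range(1000):
    x = rng.normal(size=5)
    arm = policy.choose(x)
    policy.update(arm, x, true_theta[arm] @ x + rng.normal(scale=0.1))
```

The hybrid variant mentioned in that repository additionally shares a feature block across arms; the disjoint form above is the simpler starting point.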