[IJAIT 2021] MABWiser: Contextual Multi-Armed Bandits Library
☆279 · Updated Sep 5, 2024
Alternatives and similar repositories for mabwiser
Users interested in mabwiser are comparing it to the libraries listed below.
- [AAAI 2021] TextWiser: Text Featurization Library (☆58, updated Feb 5, 2026)
- [AAAI 2024] Mab2Rec: Multi-Armed Bandits Recommender (☆158, updated Oct 11, 2024)
- [ACM 2024] Jurity: Fairness & Evaluation Library (☆57, updated Oct 11, 2024)
- [AAAI 2022] Seq2Pat: Sequence-to-Pattern Generation Library (☆135, updated Dec 3, 2024)
- BoolXAI: a research library for Explainable AI (XAI) based on expressive Boolean formulas (☆20, updated Oct 10, 2025)
- Python implementations of contextual bandits algorithms (☆824, updated Feb 22, 2026)
- A lightweight contextual bandit & reinforcement learning library designed for use in production Python services (☆71, updated Jun 4, 2021)
- Online Ranking with Multi-Armed-Bandits (☆19, updated Sep 4, 2021)
- Multi-Armed Bandit Algorithms Library (MAB) (☆135, updated Sep 6, 2022)
- 🔬 Research Framework for Single and Multi-Players 🎰 Multi-Arms Bandits (MAB) Algorithms, implementing all the state-of-the-art algorith… (☆420, updated Apr 30, 2024)
- Open Bandit Pipeline: a Python library for bandit algorithms and off-policy evaluation (☆697, updated Jun 3, 2024)
- Library of contextual bandits algorithms (☆341, updated Mar 14, 2024)
- (☆106, updated Sep 13, 2021)
- Contextual bandit in Python (☆112, updated Jul 7, 2021)
- A notebook implementing several classes of multi-armed bandits, including epsilon-greedy, UCB, and Linear UCB (Contextual bandit… (☆90, updated Dec 10, 2020)
- Python library for Multi-Armed Bandits (☆768, updated Feb 11, 2020)
- (☆22, updated Sep 9, 2015)
- Estimators to perform off-policy evaluation (☆13, updated Sep 3, 2024)
- Multi-Armed Bandit algorithms applied to the MovieLens 20M dataset (☆57, updated Aug 9, 2020)
- Code for a contextual bandits decision tree (☆21, updated Jun 11, 2019)
- A lightweight wrapper for PyTorch that provides a simple declarative API for context switching between devices, distributed modes, mixed-… (☆66, updated Jul 31, 2023)
- R package for a Multi-Armed Bandit simulation study (☆38, updated Aug 18, 2017)
- (☆37, updated Jul 8, 2019)
- Implementations and examples of common offline policy evaluation methods in Python (☆224, updated Feb 11, 2023)
- Scripts for the evaluation of contextual bandit algorithms (☆45, updated Apr 27, 2020)
- A Python 3 bandit visualization package (☆11, updated Oct 16, 2017)
- A study of NeuralUCB and regret analysis for contextual bandits with neural decisions (☆101, updated Dec 14, 2021)
- (☆25, updated Apr 29, 2023)
- Official JAX-based code for our NeuraLCB paper, "Offline Neural Contextual Bandits: Pessimism, Optimization and Generalization", ICLR… (☆13, updated Mar 13, 2022)
- Library for multi-armed bandit selection strategies, including efficient deterministic implementations of Thompson sampling and epsilon-g… (☆66, updated Mar 7, 2026)
- (☆25, updated Oct 22, 2024)
- Predict and recommend the news articles a user is most likely to click, in real time (☆32, updated Apr 3, 2018)
- Contextual bandit benchmarking (☆53, updated Jan 21, 2026)
- Big Data's open seminars: An Interactive Introduction to Reinforcement Learning (☆63, updated Jun 7, 2021)
- Python application to set up and run streaming (contextual) bandit experiments (☆84, updated Sep 4, 2025)
- spock: a framework that helps manage complex parameter configurations during the research and development of Python applications (☆142, updated Nov 3, 2023)
- Based on Thompson sampling with the online bootstrap (Dean Eckles, Maurits Kaptein): http://arxiv.org/abs/1410.4009 (☆11, updated Dec 30, 2014)
- Contextual Bandits in R: simulation and evaluation of Multi-Armed Bandit policies (☆80, updated Jul 25, 2020)
- https://sites.google.com/cornell.edu/recsys2021tutorial (☆58, updated Mar 21, 2022)
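Many of the libraries above implement classic exploration policies such as epsilon-greedy. As a minimal self-contained sketch of that technique (pure Python; this is not the API of MABWiser or any listed library, and the arm names and reward rates are made up for illustration):

```python
import random

class EpsilonGreedyBandit:
    """Minimal epsilon-greedy multi-armed bandit (illustrative sketch only)."""

    def __init__(self, arms, epsilon=0.1, seed=0):
        self.epsilon = epsilon                      # exploration probability
        self.rng = random.Random(seed)
        self.counts = {arm: 0 for arm in arms}      # pulls per arm
        self.values = {arm: 0.0 for arm in arms}    # running mean reward per arm

    def select(self):
        # Explore uniformly with probability epsilon, else exploit the best mean.
        if self.rng.random() < self.epsilon:
            return self.rng.choice(list(self.counts))
        return max(self.values, key=self.values.get)

    def update(self, arm, reward):
        # Incremental mean: m_n = m_{n-1} + (r - m_{n-1}) / n, so no reward
        # history needs to be stored.
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

# Hypothetical simulation: two Bernoulli arms with made-up click rates.
true_rates = {"article_a": 0.2, "article_b": 0.8}
bandit = EpsilonGreedyBandit(list(true_rates), epsilon=0.1, seed=0)
env = random.Random(1)
for _ in range(2000):
    arm = bandit.select()
    bandit.update(arm, 1.0 if env.random() < true_rates[arm] else 0.0)
```

After the loop, the better arm dominates the pull counts and its estimated value converges toward its true rate; the listed libraries add the contextual, off-policy, and production concerns this toy version omits.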