fidelity / mabwiser
[IJAIT 2021] MABWiser: Contextual Multi-Armed Bandits Library
☆264 · Updated last year
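MABWiser exposes a compact fit/predict workflow for both context-free and contextual bandit policies. The snippet below is a minimal sketch of that pattern using the library's LinUCB learning policy; the arm names, rewards, and context vectors are made-up illustrative data rather than an excerpt from the project's documentation.

```python
# Minimal sketch of MABWiser's fit/predict workflow (illustrative data only).
from mabwiser.mab import MAB, LearningPolicy

# Logged interactions: which arm was shown, the observed reward, and the context features.
arms = ["Arm1", "Arm2"]
decisions = ["Arm1", "Arm1", "Arm2", "Arm1"]
rewards = [20, 17, 25, 9]
contexts = [[1.2, 0.3], [0.7, 1.5], [0.4, 0.9], [1.1, 0.2]]

# Contextual bandit with LinUCB exploration.
mab = MAB(arms=arms, learning_policy=LearningPolicy.LinUCB(alpha=1.5))
mab.fit(decisions=decisions, rewards=rewards, contexts=contexts)

# Recommend an arm for a new, unseen context.
print(mab.predict([[0.5, 1.0]]))
```

Swapping `LearningPolicy.LinUCB` for a context-free policy such as `LearningPolicy.UCB1` and dropping the `contexts` argument gives the non-contextual variant of the same workflow.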
Alternatives and similar repositories for mabwiser
Users interested in mabwiser are comparing it to the libraries listed below.
- [AAAI 2024] Mab2Rec: Multi-Armed Bandits Recommender ☆156 · Updated last year
- Python implementations of contextual bandits algorithms ☆805 · Updated 4 months ago
- Multi-Armed Bandit Algorithms Library (MAB) ☆133 · Updated 3 years ago
- A lightweight contextual bandit & reinforcement learning library designed to be used in production Python services. ☆69 · Updated 4 years ago
- 🔬 Research Framework for Single and Multi-Players 🎰 Multi-Arms Bandits (MAB) Algorithms, implementing all the state-of-the-art algorithms… ☆410 · Updated last year
- Library of contextual bandits algorithms ☆335 · Updated last year
- Implementations and examples of common offline policy evaluation methods in Python. ☆224 · Updated 2 years ago
- ☆105 · Updated 4 years ago
- AuctionGym is a simulation environment that enables reproducible evaluation of bandit and reinforcement learning methods for online advertising… ☆181 · Updated 4 months ago
- Online Ranking with Multi-Armed-Bandits ☆18 · Updated 4 years ago
- ☆32 · Updated 8 months ago
- Open Bandit Pipeline: a Python library for bandit algorithms and off-policy evaluation ☆683 · Updated last year
- Bandit algorithms simulations for online learning ☆88 · Updated 5 years ago
- UpliftML: A Python Package for Scalable Uplift Modeling ☆329 · Updated 2 years ago
- [ACM 2024] Jurity: Fairness & Evaluation Library