Vincentzyx / Douzero_Cloud_Client
Cloud client for douzero training
☆11 · Updated 4 years ago
Alternatives and similar repositories for Douzero_Cloud_Client
Users interested in Douzero_Cloud_Client are comparing it to the repositories listed below.
- ☆45 · Updated 3 years ago
- C++/Python fight-the-landlord AI with pybind11 (reinforcement learning AI for Dou Dizhu), accepted to AIIDE-2020 ☆163 · Updated 4 years ago
- Douzero with ResNet and GPU support for Windows ☆47 · Updated 4 years ago
- A Deep Reinforcement Learning Approach to Texas Hold'em ☆36 · Updated 3 years ago
- [NeurIPS 2022] PerfectDou: Dominating DouDizhu with Perfect Information Distillation ☆206 · Updated last year
- DeeCamp 2019 best-team Dou Dizhu card-playing engine ☆14 · Updated 4 years ago
- Reinforcement learning algorithms to play Poker ☆14 · Updated 4 years ago
- Deep Reinforcement Learning for Multiplayer Online Battle Arena ☆90 · Updated 2 years ago
- A PyTorch implementation of SEED, originally created by Google Research for TensorFlow 2. ☆15 · Updated 5 years ago
- Python environment for Chinese Standard Mahjong on the Botzone platform. ☆14 · Updated 5 years ago
- Python and R tutorial for RLCard in Jupyter Notebook ☆97 · Updated 3 years ago
- Yanglegeyang AI ☆25 · Updated 3 years ago
- ☆12 · Updated 3 years ago
- Learn to play Sekiro with reinforcement learning. ☆17 · Updated 3 years ago
- A platform for intelligent agent learning based on a 3D open-world FPS game developed by Inspir.AI. ☆61 · Updated 3 years ago
- Poker Bot with Deep Learning ☆31 · Updated last year
- ☆56 · Updated 2 years ago
- MACTA: A Multi-agent Reinforcement Learning Approach for Cache Timing Attacks and Detection ☆46 · Updated 2 years ago
- A C++ PyTorch implementation of MuZero ☆40 · Updated last year
- A Dou Dizhu reinforcement learning AI ☆44 · Updated 8 months ago
- A distributed GPU-centric experience replay system for large AI models. ☆18 · Updated 2 years ago
- A clean and easy implementation of MuZero, AlphaZero, and self-play reinforcement learning algorithms for any game. ☆17 · Updated last year
- ☆13 · Updated 4 years ago
- ☆24 · Updated 3 years ago
- An attempt at a Python implementation of Pluribus, a No-Limit Hold'em poker bot ☆107 · Updated 5 years ago
- The ultimate goal: create an AI like AlphaZero in LOL ☆43 · Updated 2 years ago
- Leaderboard and Visualization for RLCard ☆402 · Updated 2 years ago
- ☆92 · Updated last year
- (TG'2023) Official code for the paper "Revisiting of AlphaStar" (previously called "Rethinking of AlphaStar"). It compares the raw interf… ☆10 · Updated 4 years ago
- [AutoML'22] Bayesian Generational Population-based Training (BG-PBT) ☆29 · Updated 3 years ago