LiBai (李白): A Toolbox for Large-Scale Distributed Parallel Training
☆405 · Updated Jul 31, 2025
Alternatives and similar repositories for libai
Users interested in libai are comparing it to the libraries listed below.
- Models and examples built with OneFlow ☆101 · Updated Oct 16, 2024
- Datasets, Transforms and Models specific to Computer Vision ☆91 · Updated Nov 17, 2023
- A toolkit for developers to simplify the transformation of nn.Module instances; it corresponds to PyTorch's torch.fx ☆13 · Updated Apr 7, 2023
- OneFlow->ONNX ☆43 · Updated Apr 19, 2023
- Deep learning framework performance profiling toolkit ☆296 · Updated Mar 28, 2022
- OneFlow models for benchmarking ☆104 · Updated Aug 7, 2024
- OneFlow is a deep learning framework designed to be user-friendly, scalable and efficient ☆9,391 · Updated Dec 4, 2025
- TVMScript kernel for deformable attention ☆25 · Updated Dec 15, 2021
- Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training ☆271 · Updated Mar 31, 2023
- OneFlow Serving ☆21 · Updated Apr 10, 2025
- LightSeq: A High-Performance Library for Sequence Processing and Generation ☆3,303 · Updated May 16, 2023
- Transformer-related optimization, including BERT and GPT ☆6,394 · Updated Mar 27, 2024
- Ongoing research training transformer models at scale ☆15,242 · Updated Feb 21, 2026
- Automatically Discovering Fast Parallelization Strategies for Distributed Deep Neural Network Training ☆1,861 · Updated Feb 20, 2026
- A more efficient YOLOv5 with a OneFlow backend 🎉🎉🎉 ☆217 · Updated Jul 10, 2025
- OneFlow documentation ☆69 · Updated Jun 26, 2024
- Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052 ☆478 · Updated Mar 15, 2024
- Running BERT without Padding ☆480 · Updated Mar 18, 2022
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆2,229 · Updated Aug 14, 2025
- OneDiff: An out-of-the-box acceleration library for diffusion models ☆1,970 · Updated Dec 4, 2025
- Depicts the GPU memory footprint during DNN training in PyTorch ☆11 · Updated Nov 17, 2022
- A flexible and efficient deep neural network (DNN) compiler that generates high-performance executables from a DNN model description ☆1,006 · Updated Sep 19, 2024
- Efficient Training (including pre-training and fine-tuning) for Big Models ☆621 · Updated Oct 27, 2025
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster ☆1,075 · Updated Apr 17, 2024
- Training and serving large-scale neural networks with auto parallelization ☆3,183 · Updated Dec 9, 2023
- A fast MoE implementation for PyTorch ☆1,840 · Updated Feb 10, 2025
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… ☆3,170 · Updated Feb 21, 2026
- Tutel MoE: Optimized Mixture-of-Experts Library; supports GptOss/DeepSeek/Kimi-K2/Qwen3 with FP8/NVFP4/MXFP4 ☆965 · Updated Dec 21, 2025
- Standalone Flash Attention v2 kernel without libtorch dependency ☆114 · Updated Sep 10, 2024
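Most of the projects above orbit the same theme as LiBai: distributed parallel training on OneFlow or PyTorch. For orientation, here is a minimal sketch of the SBP-based parallelism LiBai builds on, using OneFlow's global-tensor API (`flow.placement` / `flow.sbp`); the two-GPU ranks, the toy shapes, and the launch command are illustrative assumptions, not code from any listed repository.

```python
# Minimal sketch (assumptions: 2 GPUs, toy shapes) of OneFlow's global-tensor
# API, the SBP mechanism underlying LiBai's data/tensor parallelism.
# Launch with: python3 -m oneflow.distributed.launch --nproc_per_node 2 sketch.py
import oneflow as flow
import oneflow.nn as nn

# Tensors created with this placement live on GPUs 0 and 1.
placement = flow.placement("cuda", ranks=[0, 1])

# Data parallelism: split the batch (dim 0) across devices,
# replicate (broadcast) the weights to every device.
x = flow.randn(8, 4, placement=placement, sbp=flow.sbp.split(0))
linear = nn.Linear(4, 4)
linear.to_global(placement=placement, sbp=flow.sbp.broadcast)

y = linear(x)  # result stays split along the batch dimension
print(y.sbp, y.placement)
```

Swapping `flow.sbp.broadcast` for a `flow.sbp.split(...)` on the weight tensors is, in essence, how tensor parallelism is expressed in the same API.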