LiBai (李白): A Toolbox for Large-Scale Distributed Parallel Training
☆406 · Updated Jul 31, 2025
Alternatives and similar repositories for libai
Users interested in libai are comparing it to the libraries listed below:
- Models and examples built with OneFlow ☆101 · Updated Oct 16, 2024
- Datasets, Transforms and Models specific to Computer Vision ☆91 · Updated Nov 17, 2023
- A toolkit for developers to simplify the transformation of nn.Module instances. It now corresponds to Pytorch.fx. ☆13 · Updated Apr 7, 2023
- OneFlow->ONNX ☆43 · Updated Apr 19, 2023
- ☆23 · Updated Apr 25, 2023
- Deep Learning Framework Performance Profiling Toolkit ☆295 · Updated Mar 28, 2022
- OneFlow models for benchmarking. ☆104 · Updated Aug 7, 2024
- OneFlow is a deep learning framework designed to be user-friendly, scalable, and efficient. ☆9,389 · Updated Dec 4, 2025
- OneFlow Serving ☆20 · Updated Apr 10, 2025
- ☆13 · Updated Mar 27, 2023
- A more efficient YOLOv5 with OneFlow backend 🎉🎉🎉 ☆216 · Updated Jul 10, 2025
- ☆17 · Updated Jan 1, 2024
- OneFlow documentation ☆69 · Updated Jun 26, 2024
- Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training. ☆271 · Updated Mar 31, 2023
- Transformer-related optimization, including BERT, GPT ☆6,397 · Updated Mar 27, 2024
- Ongoing research training transformer models at scale ☆15,744 · Updated this week
- LightSeq: A High Performance Library for Sequence Processing and Generation ☆3,302 · Updated May 16, 2023
- ☆16 · Updated Mar 30, 2024
- ☆12 · Updated Aug 10, 2022
- TVMScript kernel for deformable attention ☆25 · Updated Dec 15, 2021
- Automatically Discovering Fast Parallelization Strategies for Distributed Deep Neural Network Training ☆1,864 · Updated last week
- OneDiff: An out-of-the-box acceleration library for diffusion models. ☆1,973 · Updated Dec 4, 2025
- ☆12 · Updated Mar 13, 2023
- Optimized BERT transformer inference on NVIDIA GPU. https://arxiv.org/abs/2210.03052 ☆477 · Updated Mar 15, 2024
- Running BERT without Padding ☆480 · Updated Mar 18, 2022
- ☆11 · Updated Dec 26, 2025
- Auto-deploy Neovim, like chxuan/vimplus ☆12 · Updated Apr 22, 2025
- Ongoing research training transformer language models at scale, including BERT & GPT-2 ☆2,233 · Updated Aug 14, 2025
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster. ☆1,077 · Updated Apr 17, 2024
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆17 · Updated Jun 3, 2024
- A flexible and efficient deep neural network (DNN) compiler that generates high-performance executables from a DNN model description. ☆1,003 · Updated Sep 19, 2024
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… ☆3,211 · Updated this week
- ☆145 · Updated Jan 30, 2025
- Training and serving large-scale neural networks with auto parallelization. ☆3,187 · Updated Dec 9, 2023
- ☆220 · Updated Aug 17, 2023
- A more efficient GLM implementation! ☆54 · Updated Feb 18, 2023
- ☆78 · Updated May 4, 2021
- gossip: Efficient Communication Primitives for Multi-GPU Systems ☆62 · Updated Jul 1, 2022
- Efficient Training (including pre-training and fine-tuning) for Big Models ☆624 · Updated Oct 27, 2025