LiBai(李白): A Toolbox for Large-Scale Distributed Parallel Training
☆405 · Jul 31, 2025 · Updated 9 months ago
Alternatives and similar repositories for libai
Users interested in libai are comparing it to the libraries listed below.
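As context for the listings below: "distributed parallel training" in toolboxes like LiBai typically combines data, tensor, and pipeline parallelism. A minimal, framework-free sketch of the data-parallel idea (per-worker gradients averaged by an all-reduce, then an identical update on every replica), not LiBai's actual API, might look like:

```python
# Conceptual sketch of data-parallel training (NOT LiBai's API):
# each worker computes a gradient on its own data shard, the
# gradients are all-reduced (averaged), and every replica applies
# the same update, so all replicas stay in sync.

def local_gradient(w, shard):
    """Gradient of the mean squared error 0.5*(w*x - y)^2 over one shard."""
    return sum((w * x - y) * x for x, y in shard) / len(shard)

def allreduce_mean(grads):
    """Stand-in for a collective all-reduce across workers."""
    return sum(grads) / len(grads)

def data_parallel_step(w, shards, lr=0.1):
    grads = [local_gradient(w, s) for s in shards]  # per-worker compute
    g = allreduce_mean(grads)                       # synchronize
    return w - lr * g                               # identical update everywhere

# Two workers, each holding a shard of the dataset y = 2*x.
shards = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0), (4.0, 8.0)]]
w = 0.0
for _ in range(200):
    w = data_parallel_step(w, shards)
print(round(w, 3))  # converges to 2.0
```

Real frameworks replace `allreduce_mean` with a collective over GPUs (e.g. NCCL all-reduce) and add tensor/pipeline splitting of the model itself.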
- Models and examples built with OneFlow ☆101 · Oct 16, 2024 · Updated last year
- Datasets, Transforms and Models specific to Computer Vision ☆91 · Nov 17, 2023 · Updated 2 years ago
- A toolkit that simplifies transforming nn.Module instances; it now corresponds to PyTorch's torch.fx ☆13 · Apr 7, 2023 · Updated 3 years ago
- OneFlow->ONNX ☆42 · Apr 19, 2023 · Updated 3 years ago
- ☆23 · Apr 25, 2023 · Updated 3 years ago
- Deep learning framework performance profiling toolkit ☆295 · Mar 28, 2022 · Updated 4 years ago
- OneFlow models for benchmarking ☆104 · Aug 7, 2024 · Updated last year
- OneFlow is a deep learning framework designed to be user-friendly, scalable and efficient ☆9,392 · Dec 4, 2025 · Updated 4 months ago
- OneFlow Serving ☆20 · Apr 10, 2025 · Updated last year
- ☆13 · Mar 27, 2023 · Updated 3 years ago
- A more efficient yolov5 with oneflow backend 🎉🎉🎉 ☆214 · Jul 10, 2025 · Updated 9 months ago
- ☆17 · Jan 1, 2024 · Updated 2 years ago
- oneflow documentation ☆69 · Jun 26, 2024 · Updated last year
- Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training ☆271 · Mar 31, 2023 · Updated 3 years ago
- Transformer-related optimization, including BERT and GPT ☆6,412 · Mar 27, 2024 · Updated 2 years ago
- Automatically discovering fast parallelization strategies for distributed deep neural network training ☆1,873 · Updated this week
- LightSeq: A High Performance Library for Sequence Processing and Generation ☆3,300 · May 16, 2023 · Updated 2 years ago
- Ongoing research training transformer models at scale ☆16,145 · Updated this week
- ☆16 · Mar 30, 2024 · Updated 2 years ago
- ☆12 · Aug 10, 2022 · Updated 3 years ago
- TVMScript kernel for deformable attention ☆25 · Dec 15, 2021 · Updated 4 years ago
- OneDiff: An out-of-the-box acceleration library for diffusion models ☆1,972 · Dec 4, 2025 · Updated 4 months ago
- ☆12 · Mar 13, 2023 · Updated 3 years ago
- Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052 ☆479 · Mar 15, 2024 · Updated 2 years ago
- Running BERT without padding ☆479 · Mar 18, 2022 · Updated 4 years ago
- ☆11 · Dec 26, 2025 · Updated 4 months ago
- Auto-deploy neovim, like chxuan/vimplus ☆12 · Apr 22, 2025 · Updated last year
- Ongoing research training transformer language models at scale, including BERT & GPT-2 ☆2,247 · Aug 14, 2025 · Updated 8 months ago
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster ☆1,076 · Apr 17, 2024 · Updated 2 years ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆17 · Jun 3, 2024 · Updated last year
- A flexible and efficient deep neural network (DNN) compiler that generates high-performance executables from a DNN model description ☆1,000 · Sep 19, 2024 · Updated last year
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… ☆3,291 · Updated this week
- ☆144 · Jan 30, 2025 · Updated last year
- Training and serving large-scale neural networks with auto parallelization ☆3,186 · Dec 9, 2023 · Updated 2 years ago
- ☆220 · Aug 17, 2023 · Updated 2 years ago
- A more efficient GLM implementation! ☆54 · Feb 18, 2023 · Updated 3 years ago
- ☆78 · May 4, 2021 · Updated 4 years ago
- gossip: Efficient Communication Primitives for Multi-GPU Systems ☆62 · Jul 1, 2022 · Updated 3 years ago
- Efficient training (including pre-training and fine-tuning) for big models ☆624 · Apr 23, 2026 · Updated last week