bytedance / lightseq
LightSeq: A High-Performance Library for Sequence Processing and Generation
☆3,301 · Updated 2 years ago
Alternatives and similar repositories for lightseq
Users interested in lightseq are comparing it to the libraries listed below.
- A fast and user-friendly runtime for transformer inference (BERT, ALBERT, GPT-2, decoders, etc.) on CPU and GPU. ☆1,534 · Updated 5 months ago
- Transformer-related optimization, including BERT, GPT ☆6,370 · Updated last year
- PatrickStar enables Larger, Faster, Greener Pretrained Models for NLP and democratizes AI for everyone. ☆773 · Updated last month
- A fast MoE implementation for PyTorch ☆1,825 · Updated 10 months ago
- PyTorch extensions for high-performance and large-scale training. ☆3,390 · Updated 7 months ago
- Pretrained language models and related optimization techniques developed by Huawei Noah's Ark Lab. ☆3,152 · Updated last year
- Ongoing research training transformer language models at scale, including BERT & GPT-2 ☆2,206 · Updated 4 months ago
- Bagua speeds up PyTorch ☆881 · Updated last year
- A high-performance and generic framework for distributed DNN training ☆3,713 · Updated 2 years ago
- A PyTorch-based knowledge distillation toolkit for natural language processing ☆1,689 · Updated 2 years ago
- OneFlow is a deep learning framework designed to be user-friendly, scalable, and efficient. ☆9,377 · Updated 2 weeks ago
- EasyNLP: A Comprehensive and Easy-to-use NLP Toolkit ☆2,179 · Updated last year
- A primitive library for neural networks ☆1,369 · Updated last year
- Training and serving large-scale neural networks with auto parallelization. ☆3,171 · Updated 2 years ago
- Ongoing research training transformer language models at scale, including BERT & GPT-2 ☆1,426 · Updated last year
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… ☆3,007 · Updated this week
- Several simple examples for popular neural network toolkits calling custom CUDA operators. ☆1,523 · Updated 4 years ago
- Jittor is a high-performance deep learning framework based on JIT compiling and meta-operators. ☆3,220 · Updated 4 months ago
- Real Transformer TeraFLOPS on various GPUs ☆915 · Updated last year
- BladeDISC is an end-to-end DynamIc Shape Compiler project for machine learning workloads. ☆910 · Updated 11 months ago
- Boosting your Web Services of Deep Learning Applications. ☆1,245 · Updated 4 years ago
- Foundation Architecture for (M)LLMs ☆3,126 · Updated last year
- LiBai (李白): A Toolbox for Large-Scale Distributed Parallel Training ☆406 · Updated 4 months ago
- ALIbaba's Collection of Encoder-decoders from MinD (Machine IntelligeNce of Damo) Lab ☆2,047 · Updated last year
- Bolt is a deep learning library with high performance and heterogeneous flexibility. ☆955 · Updated 8 months ago
- Ongoing research training transformer models at scale ☆14,602 · Updated this week
- Fast implementation of BERT inference directly on NVIDIA (CUDA, cuBLAS) and Intel MKL ☆545 · Updated 5 years ago
- EasyTransfer is designed to make the development of transfer learning in NLP applications easier. ☆863 · Updated 3 years ago
- Tutel MoE: an optimized Mixture-of-Experts library, supporting GptOss/DeepSeek/Kimi-K2/Qwen3 with FP8/NVFP4/MXFP4 ☆950 · Updated last week
- Running BERT without Padding ☆476 · Updated 3 years ago