bytedance / lightseq
LightSeq: A High Performance Library for Sequence Processing and Generation
☆3,277 · Updated 2 years ago
Alternatives and similar repositories for lightseq
Users interested in lightseq are comparing it to the libraries listed below.
- A fast and user-friendly runtime for transformer inference (Bert, Albert, GPT2, Decoders, etc) on CPU and GPU. ☆1,522 · Updated last month
- Transformer related optimization, including BERT, GPT ☆6,173 · Updated last year
- PyTorch extensions for high performance and large scale training. ☆3,322 · Updated last month
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆2,078 · Updated 2 months ago
- Ongoing research training transformer models at scale ☆12,428 · Updated last week
- EasyNLP: A Comprehensive and Easy-to-use NLP Toolkit ☆2,132 · Updated 6 months ago
- Bagua speeds up PyTorch ☆883 · Updated 9 months ago
- A fast MoE implementation for PyTorch ☆1,729 · Updated 3 months ago
- OneFlow is a deep learning framework designed to be user-friendly, scalable and efficient. ☆8,686 · Updated this week
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ☆8,771 · Updated this week (see the launch-and-train sketch after this list)
- PatrickStar enables Larger, Faster, Greener Pretrained Models for NLP and democratizes AI for everyone. ☆761 · Updated 2 years ago
- ALIbaba's Collection of Encoder-decoders from MinD (Machine IntelligeNce of Damo) Lab ☆2,045 · Updated last year
- A PyTorch-based knowledge distillation toolkit for natural language processing ☆1,658 · Updated 2 years ago
- Example models using DeepSpeed ☆6,503 · Updated last week
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper, Ada and Bla… ☆2,435 · Updated last week
- A high performance and generic framework for distributed DNN training ☆3,682 · Updated last year
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,391 · Updated last year
- Pretrained language model and its related optimization techniques developed by Huawei Noah's Ark Lab. ☆3,098 · Updated last year
- Foundation Architecture for (M)LLMs ☆3,076 · Updated last year
- A PyTorch Extension: Tools for easy mixed precision and distributed training in PyTorch ☆8,665 · Updated 2 weeks ago (see the mixed-precision sketch after this list)
- Open Source Pre-training Model Framework in PyTorch & Pre-trained Model Zoo ☆3,063 · Updated last year
- Tutel MoE: Optimized Mixture-of-Experts Library, supports DeepSeek FP8/FP4 ☆824 · Updated this week
- A primitive library for neural networks ☆1,343 · Updated 6 months ago
- Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀 ☆1,685 · Updated 7 months ago
- Fast and memory-efficient exact attention ☆17,572 · Updated last week (see the attention sketch after this list)
- Fast implementation of BERT inference directly on NVIDIA (CUDA, CUBLAS) and Intel MKL ☆544 · Updated 4 years ago
- EasyTransfer is designed to make the development of transfer learning in NLP applications easier. ☆861 · Updated 2 years ago
- BladeDISC is an end-to-end DynamIc Shape Compiler project for machine learning workloads. ☆871 · Updated 5 months ago
- Running BERT without Padding ☆471 · Updated 3 years ago
- Automatically Discovering Fast Parallelization Strategies for Distributed Deep Neural Network Training ☆1,796 · Updated last week
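
The launch-and-train entry above reads like Hugging Face Accelerate's tagline. As a minimal sketch of that pattern, assuming a toy model and synthetic data (the model, optimizer, and loader below are placeholders, not taken from the listing):

```python
# Minimal Accelerate-style training loop: device placement, DDP wrapping,
# and mixed precision are all handled by the Accelerator object.
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()  # picks up device/distributed config at launch time

model = torch.nn.Linear(16, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
dataset = TensorDataset(torch.randn(256, 16), torch.randint(0, 2, (256,)))
loader = DataLoader(dataset, batch_size=32)

# prepare() moves everything to the right device(s) and wraps as configured
model, optimizer, loader = accelerator.prepare(model, optimizer, loader)

for x, y in loader:
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(x), y)
    accelerator.backward(loss)  # replaces loss.backward() so grad scaling still works
    optimizer.step()
```

Running the same script under `accelerate launch` (after `accelerate config`) switches it to multi-GPU or mixed-precision execution without code changes.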
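The mixed-precision entry matches NVIDIA Apex's description. A minimal sketch of its legacy `apex.amp` loss-scaling pattern, assuming apex is installed and a CUDA device is available (newer code generally uses `torch.cuda.amp` instead):

```python
# Apex-style mixed precision: patch the model/optimizer once, then scale the
# loss during backward to avoid fp16 gradient underflow.
import torch
from apex import amp

model = torch.nn.Linear(16, 2).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

# opt_level "O1" runs common ops in fp16 while keeping fp32 master weights
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

x = torch.randn(32, 16, device="cuda")
y = torch.randint(0, 2, (32,), device="cuda")

loss = torch.nn.functional.cross_entropy(model(x), y)
with amp.scale_loss(loss, optimizer) as scaled_loss:  # dynamic loss scaling
    scaled_loss.backward()
optimizer.step()
```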
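The "fast and memory-efficient exact attention" entry matches FlashAttention. A minimal sketch of calling its public `flash_attn_func`, assuming the `flash-attn` package and an fp16/bf16-capable GPU (the shapes below are arbitrary examples):

```python
# FlashAttention computes exact attention in tiles, so the full
# seqlen x seqlen score matrix is never materialized in GPU memory.
import torch
from flash_attn import flash_attn_func

batch, seqlen, nheads, headdim = 2, 1024, 8, 64
# flash_attn_func expects (batch, seqlen, nheads, headdim) tensors in fp16/bf16
q = torch.randn(batch, seqlen, nheads, headdim, device="cuda", dtype=torch.float16)
k = torch.randn_like(q)
v = torch.randn_like(q)

out = flash_attn_func(q, k, v, causal=True)  # causal mask for decoder-style models
print(out.shape)  # (2, 1024, 8, 64)
```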