hpcaitech / EnergonAI
Large-scale model inference.
☆628 Updated last year
Alternatives and similar repositories for EnergonAI:
Users interested in EnergonAI are comparing it to the libraries listed below.
- Fast Inference Solutions for BLOOM ☆563 Updated 3 months ago
- Examples of training models with hybrid parallelism using ColossalAI ☆337 Updated last year
- ☆411 Updated last year
- Scalable PaLM implementation in PyTorch ☆192 Updated 2 years ago
- ☆537 Updated last month
- Microsoft Automatic Mixed Precision Library ☆549 Updated 3 months ago
- Efficient Training (including pre-training and fine-tuning) for Big Models ☆574 Updated 5 months ago
- Optimized BERT transformer inference on NVIDIA GPU. https://arxiv.org/abs/2210.03052 ☆467 Updated 10 months ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,354 Updated 9 months ago
- Running BERT without Padding ☆468 Updated 2 years ago
- LiBai(李白): A Toolbox for Large-Scale Distributed Parallel Training ☆397 Updated 2 months ago
- GPTQ inference Triton kernel ☆291 Updated last year
- ☆211 Updated last year
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers". ☆1,998 Updated 9 months ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,957 Updated 3 weeks ago
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,179 Updated 3 months ago
- LOMO: LOw-Memory Optimization ☆979 Updated 6 months ago
- Official repository for LongChat and LongEval ☆518 Updated 7 months ago
- Efficient Inference for Big Models ☆574 Updated last year
- Automatically split your PyTorch models on multiple GPUs for training & inference ☆643 Updated last year
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆1,942 Updated last month
- Best practice for training LLaMA models in Megatron-LM ☆638 Updated last year
- Efficient AI Inference & Serving ☆462 Updated last year
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackable. ☆1,549 Updated 11 months ago
- PatrickStar enables Larger, Faster, Greener Pretrained Models for NLP and democratizes AI for everyone. ☆754 Updated last year
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long Context Transformers Model Training and Inference ☆400 Updated 2 weeks ago
- ☆127 Updated 3 weeks ago
- Open Academic Research on Improving LLaMA to SOTA LLM ☆1,615 Updated last year
- LLaMa/RWKV onnx models, quantization and testcase ☆356 Updated last year
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization ☆667 Updated 5 months ago