hpcaitech / EnergonAI
Large-scale model inference.
☆630 · Updated last year
Related projects
Alternatives and complementary repositories for EnergonAI
- Examples of training models with hybrid parallelism using ColossalAI ☆336 · Updated last year
- Fast Inference Solutions for BLOOM ☆560 · Updated last month
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,338 · Updated 8 months ago
- Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052 ☆457 · Updated 8 months ago
- Efficient Training (including pre-training and fine-tuning) for Big Models ☆565 · Updated 4 months ago
- Official repository for LongChat and LongEval ☆512 · Updated 5 months ago
- Scalable PaLM implementation in PyTorch ☆192 · Updated last year
- Microsoft Automatic Mixed Precision Library ☆525 · Updated last month
- GPTQ inference Triton kernel ☆284 · Updated last year
- Efficient Inference for Big Models ☆571 · Updated last year
- Best practice for training LLaMA models in Megatron-LM ☆628 · Updated 10 months ago
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,150 · Updated last month
- LOMO: LOw-Memory Optimization ☆979 · Updated 4 months ago
- LiBai (李白): A Toolbox for Large-Scale Distributed Parallel Training ☆390 · Updated last week
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers" ☆1,945 · Updated 7 months ago
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed ☆1,908 · Updated this week
- Efficient AI Inference & Serving ☆458 · Updated 10 months ago
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization ☆649 · Updated 3 months ago
- PatrickStar enables Larger, Faster, Greener Pretrained Models for NLP and democratizes AI for everyone ☆747 · Updated last year
- Running BERT without Padding ☆460 · Updated 2 years ago
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens ☆627 · Updated 2 months ago
- Crosslingual Generalization through Multitask Finetuning ☆516 · Updated 2 months ago
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models ☆1,258 · Updated 4 months ago
- Train LLaMA on a single A100 80G node using 🤗 Transformers and 🚀 DeepSpeed pipeline parallelism ☆208 · Updated last year
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long-Context Transformer Model Training and Inference ☆364 · Updated this week
- LLaMA/RWKV ONNX models, quantization, and test cases ☆354 · Updated last year
- Tutel MoE: An Optimized Mixture-of-Experts Implementation ☆736 · Updated this week