hpcaitech / EnergonAI
Large-scale model inference.
☆631 · Updated last year
Alternatives and similar repositories for EnergonAI
Users interested in EnergonAI are comparing it to the libraries listed below.
- ☆412 · Updated last year
- Fast Inference Solutions for BLOOM ☆563 · Updated 10 months ago
- Examples of training models with hybrid parallelism using ColossalAI ☆340 · Updated 2 years ago
- ☆546 · Updated 7 months ago
- ☆220 · Updated last year
- Optimized BERT transformer inference on NVIDIA GPU. https://arxiv.org/abs/2210.03052 ☆474 · Updated last year
- LiBai (李白): A Toolbox for Large-Scale Distributed Parallel Training ☆409 · Updated last week
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,406 · Updated last year
- Microsoft Automatic Mixed Precision Library ☆616 · Updated 10 months ago
- Efficient Training (including pre-training and fine-tuning) for Big Models ☆604 · Updated 2 months ago
- GPTQ inference Triton kernel ☆303 · Updated 2 years ago
- PatrickStar enables Larger, Faster, Greener Pretrained Models for NLP and democratizes AI for everyone. ☆763 · Updated 2 years ago
- Efficient AI Inference & Serving ☆472 · Updated last year
- Running BERT without Padding ☆472 · Updated 3 years ago
- LLaMA/RWKV ONNX models, quantization, and test cases ☆363 · Updated 2 years ago
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆2,044 · Updated last month
- Best practice for training LLaMA models in Megatron-LM ☆659 · Updated last year
- Official repository for LongChat and LongEval ☆524 · Updated last year
- A high-performance inference system for large language models, designed for production environments. ☆459 · Updated 2 weeks ago
- Scalable PaLM implementation in PyTorch ☆190 · Updated 2 years ago
- ☆128 · Updated 7 months ago
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,263 · Updated 5 months ago
- Transformer-related optimization, including BERT and GPT ☆39 · Updated 2 years ago
- Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training. ☆267 · Updated 2 years ago
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens ☆870 · Updated 11 months ago
- Efficient Inference for Big Models ☆585 · Updated 2 years ago
- Serving multiple LoRA fine-tuned LLMs as one ☆1,082 · Updated last year
- ☆120 · Updated last year
- Transformer-related optimization, including BERT and GPT ☆59 · Updated last year
- Tutel MoE: an optimized Mixture-of-Experts library, supporting DeepSeek/Kimi-K2/Qwen3 FP8/FP4 ☆870 · Updated last week