EnergonAI: large-scale model inference.
☆627 · Sep 12, 2023 · Updated 2 years ago
Alternatives and similar repositories for EnergonAI
Users interested in EnergonAI are comparing it to the libraries listed below.
- Scalable PaLM implementation in PyTorch ☆190 · Dec 19, 2022 · Updated 3 years ago
- Examples of training models with hybrid parallelism using ColossalAI ☆339 · Mar 23, 2023 · Updated 3 years ago
- A Python library that transfers PyTorch tensors between CPU and NVMe ☆125 · Nov 27, 2024 · Updated last year
- Making large AI models cheaper, faster and more accessible ☆41,362 · Mar 16, 2026 · Updated last week
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆2,101 · Jun 30, 2025 · Updated 8 months ago
- Sky Computing: Accelerating Geo-distributed Computing in Federated Learning ☆90 · Nov 22, 2022 · Updated 3 years ago
- Training and serving large-scale neural networks with auto parallelization. ☆3,187 · Dec 9, 2023 · Updated 2 years ago
- Optimizing AlphaFold Training and Inference on GPU Clusters ☆613 · Jul 16, 2024 · Updated last year
- Transformer-related optimization, including BERT, GPT ☆6,397 · Mar 27, 2024 · Updated last year
- Performance benchmarking with ColossalAI ☆39 · Jul 6, 2022 · Updated 3 years ago
- A collection of models built with ColossalAI ☆32 · Nov 22, 2022 · Updated 3 years ago
- Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052 ☆477 · Mar 15, 2024 · Updated 2 years ago
- Running large language models on a single GPU for throughput-oriented scenarios. ☆9,379 · Oct 28, 2024 · Updated last year
- AITemplate is a Python framework which renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (N… ☆4,709 · Mar 16, 2026 · Updated last week
- A memory-efficient DLRM training solution using ColossalAI ☆107 · Nov 22, 2022 · Updated 3 years ago
- ☆462 · Jun 9, 2024 · Updated last year
- Automated Parallelization System and Infrastructure for Multiple Ecosystems ☆81 · Nov 19, 2024 · Updated last year
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalabili… ☆3,958 · Updated this week
- Automatically Discovering Fast Parallelization Strategies for Distributed Deep Neural Network Training ☆1,864 · Updated this week
- Efficient Inference for Big Models ☆586 · Jan 24, 2023 · Updated 3 years ago
- Serving multiple LoRA fine-tuned LLMs as one ☆1,148 · May 8, 2024 · Updated last year
- Large Language Model Text Generation Inference (see the request sketch after this list) ☆10,812 · Jan 8, 2026 · Updated 2 months ago
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… ☆3,231 · Updated this week
- ☆413 · Nov 11, 2023 · Updated 2 years ago
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆222 · Aug 19, 2024 · Updated last year
- GPTQ inference Triton kernel ☆321 · May 18, 2023 · Updated 2 years ago
- Ongoing research training transformer models at scale ☆15,744 · Updated this week
- 4-bit quantization of LLaMA using GPTQ ☆3,073 · Jul 13, 2024 · Updated last year
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆2,722 · Jun 25, 2024 · Updated last year
- A fast and user-friendly runtime for transformer inference (BERT, ALBERT, GPT-2, decoders, etc.) on CPU and GPU ☆1,542 · Jul 18, 2025 · Updated 8 months ago
- Official repository for the paper DynaPipe: Optimizing Multi-task Training through Dynamic Pipelines ☆19 · Dec 8, 2023 · Updated 2 years ago
- FlashInfer: Kernel Library for LLM Serving ☆5,194 · Updated this week
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models (see the smoothing sketch after this list) ☆1,625 · Jul 12, 2024 · Updated last year
- Accessible large language models via k-bit quantization for PyTorch (see the loading sketch after this list) ☆8,052 · Updated this week
- Hackable and optimized Transformers building blocks, supporting a composable construction. ☆10,373 · Updated this week
- Efficient AI Inference & Serving ☆480 · Jan 8, 2024 · Updated 2 years ago
- Torch Distributed Experimental ☆117 · Aug 5, 2024 · Updated last year
- A baseline repository of Auto-Parallelism in Training Neural Networks ☆147 · Jun 25, 2022 · Updated 3 years ago
- Parallelformers: An Efficient Model Parallelization Toolkit for Deployment ☆791 · Apr 24, 2023 · Updated 2 years ago
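
For the Text Generation Inference entry above, a minimal sketch of querying a running server over its HTTP `/generate` endpoint; it assumes a TGI server is already listening on localhost:8080, and the port and prompt are illustrative:

```python
# Hedged client sketch for a text-generation-inference server assumed to be
# running locally (e.g. launched via the official Docker image).
import requests

resp = requests.post(
    "http://localhost:8080/generate",
    json={
        "inputs": "What is tensor parallelism?",
        "parameters": {"max_new_tokens": 64, "temperature": 0.7},
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["generated_text"])
```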
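For the SmoothQuant entry, a toy sketch of the core idea: a per-channel scale s migrates activation outliers into the weights, so that (X / s) @ (s · W) equals X @ W while the scaled activations quantize more easily. The `smooth` helper, shapes, and alpha are illustrative, not the paper's exact recipe:

```python
import torch

def smooth(X, W, alpha=0.5):
    # X: [tokens, in_features], W: [in_features, out_features]; Y = X @ W.
    act_max = X.abs().amax(dim=0).clamp(min=1e-5)  # per-channel activation range
    w_max = W.abs().amax(dim=1).clamp(min=1e-5)    # per-channel weight range
    s = act_max.pow(alpha) / w_max.pow(1 - alpha)  # smoothing factor per channel
    return X / s, W * s[:, None]                   # mathematically equivalent pair

X = torch.randn(8, 16)
X[:, 3] *= 50                                      # inject an outlier channel
W = torch.randn(16, 4)
Xs, Ws = smooth(X, W)
assert torch.allclose(X @ W, Xs @ Ws, atol=1e-3)   # same output, tamer activations
print(X.abs().amax().item(), Xs.abs().amax().item())
```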
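And for the bitsandbytes entry, a minimal sketch of k-bit loading through the Hugging Face transformers integration; the model id is illustrative, and `transformers`, `accelerate`, and `bitsandbytes` are assumed to be installed:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize linear layers to 4-bit on load
    bnb_4bit_compute_dtype=torch.bfloat16,  # run matmuls in bf16
)
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-1.3b",                    # illustrative model id
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b")

inputs = tokenizer("Large-scale inference is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```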