thu-pacman / FasterMoE
☆87 · Updated 3 years ago
Alternatives and similar repositories for FasterMoE
Users interested in FasterMoE are comparing it to the libraries listed below.
- PyTorch implementation of paper "Response Length Perception and Sequence Scheduling: An LLM-Empowered LLM Inference Pipeline". ☆93 · Updated 2 years ago
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆217 · Updated last year
- nnScaler: Compiling DNN models for Parallel Training ☆118 · Updated last month
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity ☆222 · Updated 2 years ago
- ATC23 AE ☆47 · Updated 2 years ago
- ☆77 · Updated 4 years ago
- Chimera: bidirectional pipeline parallelism for efficiently training large-scale models. ☆67 · Updated 7 months ago
- A resilient distributed training framework ☆96 · Updated last year
- ☆58 · Updated last year
- [ICML 2024] Serving LLMs on heterogeneous decentralized clusters. ☆31 · Updated last year
- [ICLR 2025] PEARL: Parallel Speculative Decoding with Adaptive Draft Length ☆125 · Updated 2 weeks ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆126 · Updated 5 months ago
- ☆43 · Updated last year
- ☆79 · Updated 6 months ago
- A high-performance distributed deep learning system targeting large-scale and automated distributed training. If you have any interests, … ☆123 · Updated last year
- ☆124 · Updated last year
- [NeurIPS 2024] Efficient LLM Scheduling by Learning to Rank ☆63 · Updated last year
- Official implementation of ICML 2024 paper "ExCP: Extreme LLM Checkpoint Compression via Weight-Momentum Joint Shrinking". ☆48 · Updated last year
- ☆75 · Updated 3 weeks ago
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference ☆346 · Updated 4 months ago
- PyTorch library for cost-effective, fast and easy serving of MoE models. ☆257 · Updated 3 weeks ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆164 · Updated last month
- ☆158 · Updated last year
- [OSDI'24] Serving LLM-based Applications Efficiently with Semantic Variable ☆190 · Updated last year
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆169 · Updated last year
- Since the emergence of ChatGPT in 2022, the acceleration of Large Language Models has become increasingly important. Here is a list of pap… ☆278 · Updated 8 months ago
- ☆60 · Updated 11 months ago
- [NeurIPS 2024] The official implementation of "Kangaroo: Lossless Self-Speculative Decoding for Accelerating LLMs via Double Early Exitin… ☆61 · Updated last year
- Zero Bubble Pipeline Parallelism ☆433 · Updated 6 months ago
- Stateful LLM Serving ☆88 · Updated 8 months ago