Explore Inter-layer Expert Affinity in MoE Model Inference
☆16 · May 6, 2024 · Updated last year
Alternatives and similar repositories for ExFlow
Users interested in ExFlow are comparing it to the libraries listed below.
- Code repository for the NeurIPS 2024 paper "Toward Efficient Inference for Mixture of Experts" ☆19 · Oct 30, 2024 · Updated last year
- Code release for AdapMoE, accepted by ICCAD 2024 ☆38 · Apr 28, 2025 · Updated last year
- ☆59 · May 4, 2024 · Updated last year
- Official code for the paper "HEXA-MoE: Efficient and Heterogeneous-Aware MoE Acceleration with Zero Computation Redundancy" ☆15 · Mar 6, 2025 · Updated last year
- ☆29 · May 24, 2024 · Updated last year
- ATC23 AE ☆45 · May 11, 2023 · Updated 2 years ago
- Battleship environment for reinforcement learning tasks ☆14 · Apr 29, 2023 · Updated 3 years ago
- Source code for the paper "MILLION: Mastering Long-Context LLM Inference Via Outlier-Immunized KV Product Qu…" ☆23 · Apr 2, 2025 · Updated last year
- [ICLR'25] Fast Inference of MoE Models with CPU-GPU Orchestration ☆265 · Nov 18, 2024 · Updated last year
- [ICLR 2025] DGQ: Distribution-Aware Group Quantization for Text-to-Image Diffusion Models ☆19 · Mar 25, 2025 · Updated last year
- Incremental Mobile User Profiling: Reinforcement Learning with Spatial Knowledge Graph for Modeling Event Streams ☆15 · Jul 25, 2024 · Updated last year
- ☆21 · Feb 10, 2025 · Updated last year
- Efficient Expert Pruning for Sparse Mixture-of-Experts Language Models: Enhancing Performance and Reducing Inference Costs ☆23 · Nov 11, 2025 · Updated 5 months ago
- Inference framework for MoE layers based on TensorRT with Python bindings ☆41 · May 31, 2021 · Updated 4 years ago
- Source code for "Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs" ☆43 · Aug 14, 2024 · Updated last year
- High-performance GEMM implementation optimized for NVIDIA H100 GPUs, leveraging the Hopper architecture's TMA, WGMMA, and Thread Block Cluste… ☆10 · Dec 4, 2024 · Updated last year
- PyTorch library for cost-effective, fast, and easy serving of MoE models ☆302 · Apr 17, 2026 · Updated last week
- ASIC simulation of a multi-ported memory module. It offers an SRAM-based dual-port basic building block to support multiple read/write … ☆23 · May 30, 2016 · Updated 9 years ago
- Tutorial exercises and code for the GPU Communications Tutorial at Hot Interconnects 2025 ☆31 · Oct 22, 2025 · Updated 6 months ago
- ☆15 · Apr 11, 2024 · Updated 2 years ago
- MISO: Exploiting Multi-Instance GPU Capability on Multi-Tenant GPU Clusters ☆21 · Apr 21, 2023 · Updated 3 years ago
- ☆311 · Jul 10, 2025 · Updated 9 months ago
- ☆91 · Apr 2, 2022 · Updated 4 years ago
- Curated collection of papers on MoE model inference ☆381 · Mar 12, 2026 · Updated last month
- [NSDI25] AutoCCL: Automated Collective Communication Tuning for Accelerating Distributed and Parallel DNN Training ☆31 · May 2, 2025 · Updated 11 months ago
- Code for the MLSys 2024 paper "SiDA-MoE: Sparsity-Inspired Data-Aware Serving for Efficient and Scalable Large Mixture-of-Experts Models" ☆22 · Apr 13, 2024 · Updated 2 years ago
- vLLM-based code for the paper "Cost-Efficient Large Language Model Serving for Multi-turn Conversations with CachedAttention" ☆11 · Sep 19, 2024 · Updated last year
- Compiler for dynamic neural networks ☆45 · Nov 13, 2023 · Updated 2 years ago
- STREAMer: Benchmarking remote volatile and non-volatile memory bandwidth ☆18 · Aug 21, 2023 · Updated 2 years ago
- JAX Scalify: end-to-end scaled arithmetic ☆18 · Oct 30, 2024 · Updated last year
- An experimentation platform for LLM inference optimisation ☆36 · Sep 19, 2024 · Updated last year
- Linux kernel technical documentation ☆16 · Feb 26, 2026 · Updated 2 months ago
- Visualize expert firing frequencies across sentences in the Mixtral MoE model ☆18 · Dec 22, 2023 · Updated 2 years ago
- ☆21 · Jun 9, 2025 · Updated 10 months ago
- ☆17 · Apr 15, 2025 · Updated last year
- The pmem.io website ☆17 · Jan 20, 2026 · Updated 3 months ago
- Minimalist RL for diffusion LLMs with SOTA reasoning performance (89.1% GSM8K). Official implementation of "The Flexibility Trap" ☆134 · Apr 3, 2026 · Updated 3 weeks ago
- This module collects per-page stats and decides, for each page, whether it should be migrated, replicated, or interleaved ☆17 · Sep 29, 2015 · Updated 10 years ago
- Adaptive Message Quantization and Parallelization for Distributed Full-graph GNN Training ☆24 · Mar 1, 2024 · Updated 2 years ago