☆58 · Updated May 4, 2024
Alternatives and similar repositories for Pregated_MoE
Users interested in Pregated_MoE are comparing it to the repositories listed below.
- Open-source code of LazyDP, published at ASPLOS 2024 · ☆22 · Updated May 5, 2024
- ☆26 · Updated Dec 3, 2025
- ☆83 · Updated May 27, 2025
- ☆21 · Updated Nov 27, 2025
- GPU-based Distributed Point Functions (DPF) and 2-server private information retrieval (PIR) · ☆56 · Updated Jan 27, 2023
- ☆16 · Updated Dec 4, 2025
- ☆169 · Updated Feb 1, 2025
- [ICML'25] Official code for the paper "Occult: Optimizing Collaborative Communication across Experts for Accelerated Parallel MoE Training an… · ☆13 · Updated Apr 17, 2025
- Code release for AdapMoE, accepted at ICCAD 2024 · ☆36 · Updated Apr 28, 2025
- Explore Inter-layer Expert Affinity in MoE Model Inference · ☆16 · Updated May 6, 2024
- Tender: Accelerating Large Language Models via Tensor Decomposition and Runtime Requantization (ISCA'24) · ☆31 · Updated Jul 4, 2024
- Sirius, an efficient correction mechanism that significantly boosts Contextual Sparsity models on reasoning tasks while maintaining its… · ☆21 · Updated Sep 10, 2024
- ☆131 · Updated Nov 11, 2024
- PyTorch library for cost-effective, fast, and easy serving of MoE models · ☆289 · Updated this week
- ☆15 · Updated Apr 11, 2024
- Official code for the paper "HEXA-MoE: Efficient and Heterogeneous-Aware MoE Acceleration with Zero Computation Redundancy" · ☆15 · Updated Mar 6, 2025
- Source code for the paper "MILLION: Mastering Long-Context LLM Inference Via Outlier-Immunized KV Product Qu… · ☆23 · Updated Apr 2, 2025
- [NAACL'25 🏆 SAC Award] Official code for "Advancing MoE Efficiency: A Collaboration-Constrained Routing (C2R) Strategy for Better Expert… · ☆16 · Updated Feb 4, 2025
- Official PyTorch implementation of CD-MOE · ☆12 · Updated Mar 18, 2026
- STREAMer: Benchmarking remote volatile and non-volatile memory bandwidth · ☆17 · Updated Aug 21, 2023
- Curated collection of papers on MoE model inference · ☆361 · Updated Mar 12, 2026
- [ICLR 2025] Linear Combination of Saved Checkpoints Makes Consistency and Diffusion Models Better · ☆16 · Updated Feb 15, 2025
- ☆17 · Updated Jun 11, 2025
- ☆22 · Updated Jun 4, 2023
- Meta-package providing the Samsung OneMCC (Memory-Centric Computing) infrastructure · ☆32 · Updated Nov 26, 2025
- ☆16 · Updated Dec 9, 2023
- Scaling Up Memory Disaggregated Applications with SMART · ☆34 · Updated Apr 23, 2024
- [DATE'25, TCAD'25] Terafly: A Multi-Node FPGA-Based Accelerator Design for Efficient Cooperative Inference in LLMs · ☆30 · Updated Nov 13, 2025
- Summary of awesome work on optimizing LLM inference · ☆225 · Updated Feb 14, 2026
- Papers and their code for AI systems · ☆356 · Updated Feb 10, 2026
- [NeurIPS 2023] ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer · ☆30 · Updated Dec 6, 2023
- ☆31 · Updated Jan 16, 2025
- Source code for "Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs" · ☆43 · Updated Aug 14, 2024
- A low-latency, high-throughput serving engine for LLMs · ☆486 · Updated Jan 8, 2026
- Driver to measure vmlaunch latency · ☆10 · Updated Jun 28, 2022
- Lab 5 project of MIT 6.5940: deploying LLaMA2-7B-chat on a laptop with TinyChatEngine · ☆18 · Updated Dec 1, 2023
- ☆32 · Updated Apr 2, 2025
- ☆25 · Updated Apr 13, 2025
- Sample codes using NVSHMEM on multi-GPU systems · ☆30 · Updated Jan 22, 2023