Official code for the paper "Examining Post-Training Quantization for Mixture-of-Experts: A Benchmark"
☆29 · Jun 30, 2025 · Updated 8 months ago
Alternatives and similar repositories for MoE-Quantization
Users interested in MoE-Quantization are comparing it to the libraries listed below.
- [ICLR'24 Spotlight] Code for the paper "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy" ☆102 · Jun 20, 2025 · Updated 8 months ago
- [ICML 2024] Code for the paper "MoE-RBench: Towards Building Reliable Language Models with Sparse Mixture-of-Experts" ☆10 · Jul 1, 2024 · Updated last year
- Official PyTorch implementation of CD-MOE ☆12 · Mar 29, 2025 · Updated 11 months ago
- Generic implementation of the Number Theoretic Transform for cryptographic applications ☆14 · Aug 13, 2025 · Updated 6 months ago
- Code to reproduce the experiments of the ICLR'24 paper "Sparse Model Soups: A Recipe for Improved Pruning via Model Averaging" ☆12 · Oct 14, 2025 · Updated 4 months ago
- ☆12 · Oct 4, 2023 · Updated 2 years ago
- Rust SDK for zkWasm ☆11 · Feb 11, 2026 · Updated 2 weeks ago
- WebAssembly low-level implementation of pairing-friendly curves ☆15 · Feb 10, 2026 · Updated 2 weeks ago
- [NAACL'25 🏆 SAC Award] Official code for "Advancing MoE Efficiency: A Collaboration-Constrained Routing (C2R) Strategy for Better Expert…" ☆14 · Feb 4, 2025 · Updated last year
- Official PyTorch implementation of "GuidedQuant: Large Language Model Quantization via Exploiting End Loss Guidance" (ICML 2025) ☆50 · Jul 6, 2025 · Updated 7 months ago
- Educational version of a lookup argument ☆12 · Apr 10, 2025 · Updated 10 months ago
- Starky implementation of BLS12-381 ☆13 · May 16, 2024 · Updated last year
- ZK proofs for Brainfuck execution using powdr ☆17 · Aug 28, 2024 · Updated last year
- Official code for the paper "HEXA-MoE: Efficient and Heterogeneous-Aware MoE Acceleration with Zero Computation Redundancy" ☆15 · Mar 6, 2025 · Updated 11 months ago
- ☆21 · May 2, 2025 · Updated 9 months ago
- [ECCV 2024] Code for the paper "Mew: Multiplexed Immunofluorescence Image Analysis through an Efficient Multiplex Network" ☆17 · Jul 27, 2024 · Updated last year
- Tooling for halo2 circuit verification in Move environments ☆16 · Feb 14, 2026 · Updated 2 weeks ago
- [ICML 2025] KVTuner: Sensitivity-Aware Layer-wise Mixed Precision KV Cache Quantization for Efficient and Nearly Lossless LLM Inference ☆26 · Jan 27, 2026 · Updated last month
- 👩‍💻 Circom compiler, snippets, hover and language support for Visual Studio Code ☆16 · Apr 20, 2023 · Updated 2 years ago
- ☆20 · Nov 3, 2025 · Updated 3 months ago
- [ICML 2024] Junk DNA Hypothesis: A Task-Centric Angle of LLM Pre-trained Weights through Sparsity; Lu Yin*, Ajay Jaiswal*, Shiwei Liu, So… ☆16 · Apr 21, 2025 · Updated 10 months ago
- Circuits for Pluto's `web-prover` ☆25 · Apr 25, 2025 · Updated 10 months ago
- Rust bindings to the Halide runtime ☆20 · Apr 11, 2023 · Updated 2 years ago
- GPU-accelerated cryptography libraries for ZKsync ☆22 · Feb 20, 2026 · Updated last week
- ☆23 · Nov 26, 2024 · Updated last year
- Repo for EmbedLLM: Learning Compact Representations of Large Language Models ☆27 · Sep 25, 2025 · Updated 5 months ago
- BN254 pairing implementation in Noir ☆23 · Aug 23, 2023 · Updated 2 years ago
- A parallel proving service for ZKM ☆22 · Dec 18, 2025 · Updated 2 months ago
- Code repo for efficient quantized MoE inference with mixture of low-rank compensators ☆31 · Apr 14, 2025 · Updated 10 months ago
- A utility for generating conversational podcasts with AI text-to-speech, inspired by Google's NotebookLM ☆20 · Sep 16, 2024 · Updated last year
- ☆24 · Mar 2, 2025 · Updated 11 months ago
- A library of gadgets compatible with bellpepper and bellperson (contact: @huitseeker) ☆18 · Mar 3, 2025 · Updated 11 months ago
- ☆23 · Jun 12, 2025 · Updated 8 months ago
- [ICLR 2025] Mixture Compressor for Mixture-of-Experts LLMs Gains More ☆66 · Feb 12, 2025 · Updated last year
- [COLM 2024] SKVQ: Sliding-window Key and Value Cache Quantization for Large Language Models ☆25 · Oct 5, 2024 · Updated last year
- High-performance EraVM for zkSync ☆23 · Feb 20, 2026 · Updated last week
- Efficient Expert Pruning for Sparse Mixture-of-Experts Language Models: Enhancing Performance and Reducing Inference Costs ☆23 · Nov 11, 2025 · Updated 3 months ago
- D^2-MoE: Delta Decompression for MoE-based LLMs Compression ☆72 · Mar 25, 2025 · Updated 11 months ago