GRadient-INformed MoE
☆264, updated Sep 25, 2024
Alternatives and similar repositories for GRIN-MoE
Users interested in GRIN-MoE are comparing it to the libraries listed below.
- Sparse Backpropagation for Mixture-of-Expert Training (☆29, updated Jul 2, 2024)
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients (☆204, updated Jul 17, 2024)
- [NAACL'25 🏆 SAC Award] Official code for "Advancing MoE Efficiency: A Collaboration-Constrained Routing (C2R) Strategy for Better Expert…" (☆16, updated Feb 4, 2025)
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward experts (☆226, updated Sep 18, 2025)
- MoRA: High-Rank Updating for Parameter-Efficient Fine-Tuning (☆361, updated Aug 7, 2024)
- smolLM with the Entropix sampler in PyTorch (☆149, updated Oct 31, 2024)
- OLMoE: Open Mixture-of-Experts Language Models (☆987, updated Sep 23, 2025)
- [ACL 2025 Main] EfficientQAT: Efficient Quantization-Aware Training for Large Language Models (☆330, updated Nov 26, 2025)
- ☆138, updated Aug 19, 2024
- An Open Large Reasoning Model for Real-World Solutions (☆1,539, updated Feb 13, 2026)
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up long-context LLMs' inference, approximate and dynamic sparse calculation of the attention… (☆1,198, updated Mar 9, 2026; a toy sketch of the idea follows this list)
- PyTorch implementation of models from the Zamba2 series (☆189, updated Jan 23, 2025)
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 (☆361, updated Feb 5, 2026)
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) (☆200, updated May 28, 2024)
- [ICLR 2025] LongWriter: Unleashing 10,000+ Word Generation from Long Context LLMs (☆1,839, updated Jun 24, 2025)
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models" (☆281, updated Nov 3, 2023)
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] (☆149, updated Oct 27, 2024)
- VPTQ, a flexible and extreme low-bit quantization algorithm (☆674, updated Apr 25, 2025)
- A family of open-source Mixture-of-Experts (MoE) Large Language Models (☆1,667, updated Mar 8, 2024)
- Code for training & evaluating Contextual Document Embedding models (☆202, updated May 14, 2025)
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" (☆253, updated Jan 31, 2025)
- [ICLR 2025] MoE++: Accelerating Mixture-of-Experts Methods with Zero-Computation Experts (☆264, updated Oct 16, 2024)
- [NeurIPS 2025] Elevating Visual Perception in Multimodal LLMs with Visual Embedding Distillation (☆71, updated Oct 17, 2025)
- Generative Modeling with Bayesian Sample Inference (☆24, updated May 17, 2025)
- [TMLR 2025] When Attention Collapses: How Degenerate Layers in LLMs Enable Smaller, Stronger Models (☆125, updated Mar 6, 2026)
- A minimal implementation of a LLaVA-style VLM with interleaved image, text, and video processing ability (☆98, updated Dec 17, 2024)
- [ICLR 2025] Monet: Mixture of Monosemantic Experts for Transformers (☆76, updated Jun 23, 2025)
- Official implementation for DenseMixer: Improving MoE Post-Training with Precise Router Gradient (☆66, updated Aug 3, 2025)
- FBI-LLM: Scaling Up Fully Binarized LLMs from Scratch via Autoregressive Distillation (☆51, updated Aug 24, 2025)
- ☆10, updated Feb 12, 2024
- Mixture-of-Experts (MoE) Language Model (☆196, updated Sep 9, 2024)
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… (☆374, updated Dec 12, 2024; see the memory-layer sketch after this list)
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture (☆213, updated Jan 6, 2025)
- LongRoPE is a novel method that can extend the context window of pre-trained LLMs to an impressive 2048k tokens (☆281, updated Oct 28, 2025)
- MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases, ICML 2024 (☆1,417, updated Apr 21, 2025)
- Train, tune, and run inference with the Bamba model (☆137, updated Jun 4, 2025)
- Tools for merging pretrained large language models (☆6,867, updated Mar 15, 2026)
- Examples for using the SiLLM framework for training and running Large Language Models (LLMs) on Apple Silicon (☆16, updated May 8, 2025)
- [ICLR 2024] Lemur: Open Foundation Models for Language Agents (☆557, updated Oct 28, 2023)
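
Two of the entries above describe their mechanism concretely enough that a small sketch may help. First, the long-context inference entry mentions approximate, dynamic sparse attention. Below is a minimal PyTorch illustration of the underlying idea only: each query attends to its top-scoring keys and ignores the rest. The function name and the `keep` parameter are assumptions for illustration, and the dense scoring pass is kept for clarity; the actual repository detects per-head sparse patterns with custom kernels precisely to avoid this dense pass.

```python
import torch
import torch.nn.functional as F

def topk_sparse_attention(q, k, v, keep: int = 64):
    """Toy dynamic sparse attention: per query, attend only to the
    `keep` highest-scoring keys. Dense scoring is done here for clarity,
    so this sketch shows the approximation, not the speedup."""
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)   # (B, H, Sq, Sk)
    keep = min(keep, scores.shape[-1])
    kth = scores.topk(keep, dim=-1).values[..., -1:]          # per-query threshold
    scores = scores.masked_fill(scores < kth, float("-inf"))  # drop the rest
    return F.softmax(scores, dim=-1) @ v

# smoke test: batch 1, 2 heads, 128 tokens, head dim 32
q = k = v = torch.randn(1, 2, 128, 32)
print(topk_sparse_attention(q, k, v, keep=16).shape)  # torch.Size([1, 2, 128, 32])
```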
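Second, the memory-layers entry describes a trainable key-value lookup that adds parameters without adding (much) per-token compute. Here is a sketch in the spirit of product-key memories (Lample et al., 2019), the standard way to make that lookup cheap: the key space is the Cartesian product of two small sub-key sets, so each token scores O(√N) sub-keys instead of all N slots. The class name, sizes, and output handling are assumptions for illustration, not that repository's API.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProductKeyMemory(nn.Module):
    """Sketch of a sparsely activated memory layer. There are N = n_sub**2
    value slots, but each token scores only 2 * n_sub sub-keys and reads k
    values, so parameters grow with N while per-token FLOPs stay roughly flat."""

    def __init__(self, d_model: int, n_sub: int = 32, k: int = 8):
        super().__init__()
        half = d_model // 2
        self.n_sub, self.k = n_sub, k
        self.sub_keys1 = nn.Parameter(torch.randn(n_sub, half) * 0.02)
        self.sub_keys2 = nn.Parameter(torch.randn(n_sub, half) * 0.02)
        self.values = nn.Embedding(n_sub * n_sub, d_model)  # the big parameter bank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, S, D = x.shape
        q1, q2 = x[..., : D // 2], x[..., D // 2:]
        s1, i1 = (q1 @ self.sub_keys1.t()).topk(self.k, dim=-1)  # (B, S, k)
        s2, i2 = (q2 @ self.sub_keys2.t()).topk(self.k, dim=-1)
        # combine the k x k candidate (sub-key1, sub-key2) pairs, keep overall top-k
        cand = (s1.unsqueeze(-1) + s2.unsqueeze(-2)).flatten(-2)            # (B, S, k*k)
        cand_idx = (i1.unsqueeze(-1) * self.n_sub + i2.unsqueeze(-2)).flatten(-2)
        best, pos = cand.topk(self.k, dim=-1)
        idx = cand_idx.gather(-1, pos)                                      # value-slot ids
        w = F.softmax(best, dim=-1)
        return (w.unsqueeze(-1) * self.values(idx)).sum(dim=-2)            # (B, S, D)

# smoke test: 1,024 memory slots behind a 64-dim model
layer = ProductKeyMemory(d_model=64, n_sub=32, k=4)
print(layer(torch.randn(2, 10, 64)).shape)  # torch.Size([2, 10, 64])
```

This split is also why such layers are described as complementing dense feed-forward blocks: capacity lives in the `values` table, while compute is limited to the handful of rows gathered per token.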