GRadient-INformed MoE
☆264 · Sep 25, 2024 · Updated last year
Alternatives and similar repositories for GRIN-MoE
Users interested in GRIN-MoE are comparing it to the libraries listed below; a minimal sketch of the top-k expert routing idea that the MoE projects in this list share follows after the list.
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ☆203 · Jul 17, 2024 · Updated last year
- Sparse Backpropagation for Mixture-of-Expert Training ☆29 · Jul 2, 2024 · Updated last year
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward exp… ☆226 · Sep 18, 2025 · Updated 5 months ago
- [NAACL'25 🏆 SAC Award] Official code for "Advancing MoE Efficiency: A Collaboration-Constrained Routing (C2R) Strategy for Better Expert… ☆15 · Feb 4, 2025 · Updated last year
- [NeurIPS 2025] Elevating Visual Perception in Multimodal LLMs with Visual Embedding Distillation ☆71 · Oct 17, 2025 · Updated 4 months ago
- [ACL 2025 Main] EfficientQAT: Efficient Quantization-Aware Training for Large Language Models ☆327 · Nov 26, 2025 · Updated 3 months ago
- FBI-LLM: Scaling Up Fully Binarized LLMs from Scratch via Autoregressive Distillation ☆51 · Aug 24, 2025 · Updated 6 months ago
- PyTorch implementation of models from the Zamba2 series. ☆187 · Jan 23, 2025 · Updated last year
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆358 · Feb 5, 2026 · Updated 3 weeks ago
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up long-context LLM inference, uses approximate, dynamic sparse calculation of the attention… ☆1,190 · Sep 30, 2025 · Updated 5 months ago
- MoRA: High-Rank Updating for Parameter-Efficient Fine-Tuning ☆361 · Aug 7, 2024 · Updated last year
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆149 · Oct 27, 2024 · Updated last year
- Generative Modeling with Bayesian Sample Inference ☆24 · May 17, 2025 · Updated 9 months ago
- VPTQ, a flexible and extreme low-bit quantization algorithm ☆674 · Apr 25, 2025 · Updated 10 months ago
- smolLM with the Entropix sampler in PyTorch ☆149 · Oct 31, 2024 · Updated last year
- An Open Large Reasoning Model for Real-World Solutions ☆1,532 · Feb 13, 2026 · Updated 2 weeks ago
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆251 · Jan 31, 2025 · Updated last year
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture ☆213 · Jan 6, 2025 · Updated last year
- OLMoE: Open Mixture-of-Experts Language Models ☆974 · Sep 23, 2025 · Updated 5 months ago
- [ICLR 2025] LongWriter: Unleashing 10,000+ Word Generation from Long Context LLMs ☆1,830 · Jun 24, 2025 · Updated 8 months ago
- ☆67 · Mar 30, 2025 · Updated 11 months ago
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) ☆198 · May 28, 2024 · Updated last year
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". ☆279 · Nov 3, 2023 · Updated 2 years ago
- Mixture-of-Experts (MoE) Language Model ☆196 · Sep 9, 2024 · Updated last year
- code for training & evaluating Contextual Document Embedding models ☆201 · May 14, 2025 · Updated 9 months ago
- Long Context Transfer from Language to Vision ☆402 · Mar 18, 2025 · Updated 11 months ago
- [NeurIPS 2024] Official Repository of The Mamba in the Llama: Distilling and Accelerating Hybrid Models ☆238 · Oct 14, 2025 · Updated 4 months ago
- [ICLR 2024] Lemur: Open Foundation Models for Language Agents ☆555 · Oct 28, 2023 · Updated 2 years ago
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best… ☆59 · Mar 17, 2025 · Updated 11 months ago
- Eagle: Frontier Vision-Language Models with Data-Centric Strategies ☆930 · Oct 25, 2025 · Updated 4 months ago
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,660 · Mar 8, 2024 · Updated last year
- ☆137 · Aug 19, 2024 · Updated last year
- LongRoPE is a novel method that extends the context window of pre-trained LLMs to an impressive 2048k tokens. ☆280 · Oct 28, 2025 · Updated 4 months ago
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆209 · May 20, 2024 · Updated last year
- A framework to study AI models in Reasoning, Alignment, and use of Memory (RAM). ☆344 · Dec 16, 2025 · Updated 2 months ago
- Official implementation of the paper "Linear Transformers with Learnable Kernel Functions are Better In-Context Models" ☆169 · Jan 16, 2025 · Updated last year
- [COLM 2025] "C3PO: Critical-Layer, Core-Expert, Collaborative Pathway Optimization for Test-Time Expert Re-Mixing" ☆20 · Apr 9, 2025 · Updated 10 months ago
- MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases (ICML 2024) ☆1,410 · Apr 21, 2025 · Updated 10 months ago
- Train, tune, and run inference with the Bamba model ☆137 · Jun 4, 2025 · Updated 8 months ago
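
Several of the repositories above (SparseMixer, ModuleFormer, C2R, QMoE, OLMoE, and GRIN-MoE itself) revolve around the same core mechanism: a router makes a discrete per-token expert choice, and that discrete step blocks ordinary backpropagation into the router. The sketch below is not GRIN-MoE's actual SparseMixer-v2 estimator; it is a generic, illustrative top-1 MoE layer that uses a straight-through trick so the router still receives gradients. All names and dimensions (`TinyTop1MoE`, `d_model`, `n_experts`) are assumptions for illustration.

```python
# Illustrative sketch only -- NOT GRIN-MoE's SparseMixer-v2 estimator.
# A top-1 MoE layer where the forward pass uses a hard (one-hot) expert
# choice, but the backward pass routes gradients through the soft router
# probabilities (straight-through estimator).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyTop1MoE(nn.Module):  # hypothetical name, for illustration
    def __init__(self, d_model=64, d_ff=128, n_experts=4):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                          # x: (tokens, d_model)
        probs = F.softmax(self.router(x), dim=-1)  # (tokens, n_experts)
        top_i = probs.argmax(dim=-1)               # hard top-1 expert choice
        hard = F.one_hot(top_i, probs.size(-1)).to(probs.dtype)
        # Straight-through: the forward value equals the hard one-hot mask,
        # but gradients flow as if the gate were the soft `probs`.
        gate = hard + probs - probs.detach()       # (tokens, n_experts)
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            sel = top_i == e                       # tokens routed to expert e
            if sel.any():
                out[sel] = gate[sel, e:e+1] * expert(x[sel])
        return out

# Usage: the router's weights get gradients despite the discrete argmax.
moe = TinyTop1MoE()
x = torch.randn(8, 64)
moe(x).sum().backward()
print(moe.router.weight.grad.abs().sum() > 0)  # tensor(True)
```

In production MoE stacks this straight-through gate is usually replaced by softmax-weighted top-k dispatch plus a load-balancing loss; the point of GRIN-MoE and of SparseMixer-style work is precisely a better gradient estimate for this discrete routing step.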