RAIVNLab / MatFormer-OLMo
Code repository for the public reproduction of the language modelling experiments from "MatFormer: Nested Transformer for Elastic Inference"
☆22 · Updated last year
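For context, the core MatFormer idea is a nested ("Matryoshka") feed-forward block: each smaller submodel reuses a prefix of the full model's FFN hidden units, so one set of trained weights can be sliced into several model sizes at inference time. The sketch below is a minimal illustration of that nesting in plain PyTorch; the class name, granularity fractions, and `frac` argument are hypothetical, not the repo's actual API.

```python
import torch
import torch.nn as nn

class MatFormerFFN(nn.Module):
    """Nested feed-forward block: smaller granularities reuse a prefix of
    the full hidden layer, so one weight matrix serves many model sizes.
    (Illustrative sketch of the MatFormer idea, not this repo's code;
    the granularity fractions below are hypothetical.)"""

    def __init__(self, d_model: int, d_ff: int, granularities=(0.25, 0.5, 1.0)):
        super().__init__()
        self.w_in = nn.Linear(d_model, d_ff)
        self.w_out = nn.Linear(d_ff, d_model)
        self.granularities = granularities

    def forward(self, x: torch.Tensor, frac: float = 1.0) -> torch.Tensor:
        # Use only the first `frac * d_ff` hidden units; at frac == 1.0
        # this reduces to the ordinary full FFN.
        m = int(self.w_in.out_features * frac)
        h = torch.relu(x @ self.w_in.weight[:m].T + self.w_in.bias[:m])
        return h @ self.w_out.weight[:, :m].T + self.w_out.bias

ffn = MatFormerFFN(d_model=64, d_ff=256)
x = torch.randn(2, 10, 64)
for frac in ffn.granularities:
    y = ffn(x, frac=frac)  # same weights, three nested model sizes
```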
Alternatives and similar repositories for MatFormer-OLMo
Users interested in MatFormer-OLMo are comparing it to the repositories listed below
- The open-source materials for the paper "Sparsing Law: Towards Large Language Models with Greater Activation Sparsity" ☆23 · Updated 7 months ago
- Official implementation of the ICML 2024 paper RoSA (Robust Adaptation) ☆42 · Updated last year
- [ICLR 2025] Official PyTorch implementation of "Mix-LN: Unleashing the Power of Deeper Layers by Combining Pre-LN and Post-LN" by Pengxia… ☆24 · Updated 6 months ago
- ☆21 · Updated 3 months ago
- Official implementation of the paper "A deeper look at depth pruning of LLMs" ☆15 · Updated 11 months ago
- Using FlexAttention to compute attention with different masking patterns ☆44 · Updated 9 months ago
- ☆25 · Updated last year
- Code for "Everybody Prune Now: Structured Pruning of LLMs with only Forward Passes" ☆28 · Updated last year
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆42 · Updated last year
- ☆15 · Updated last year
- Code for "RSQ: Learning from Important Tokens Leads to Better Quantized LLMs" ☆18 · Updated 2 weeks ago
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models ☆31 · Updated last year
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling ☆32 · Updated 3 months ago
- This repo is based on https://github.com/jiaweizzhao/GaLore ☆29 · Updated 9 months ago
- HGRN2: Gated Linear RNNs with State Expansion ☆55 · Updated 10 months ago
- ☆47 · Updated 2 weeks ago
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆27 · Updated last year
- Beyond KV Caching: Shared Attention for Efficient LLMs ☆19 · Updated 11 months ago
- Code for T-MARS data filtering ☆35 · Updated last year
- Lottery Ticket Adaptation ☆39 · Updated 7 months ago
- ☆79 · Updated 10 months ago
- An efficient implementation of the NSA (Native Sparse Attention) kernel ☆72 · Updated this week
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆38 · Updated 2 weeks ago
- [ICML 2024 Spotlight] Fine-Tuning Pre-trained Large Language Models Sparsely ☆23 · Updated last year
- Here we will test various linear attention designs. ☆59 · Updated last year
- [ICML 2024] Junk DNA Hypothesis: A Task-Centric Angle of LLM Pre-trained Weights through Sparsity; Lu Yin*, Ajay Jaiswal*, Shiwei Liu, So… ☆16 · Updated 2 months ago
- ☆14 · Updated last month
- [ICLR 2024] This is the official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Models" ☆27 · Updated last year
- An unofficial implementation of "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" (see the routing sketch after this list) ☆35 · Updated last year
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models ☆52 · Updated 4 months ago
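As referenced in the Mixture-of-Depths entry above, that technique has a learned router send only a top-k subset of tokens per sequence through each expensive block, while the remaining tokens skip it via the residual stream. Below is a minimal sketch under assumed shapes; `MoDBlock`, the `capacity` parameter, and the sigmoid gating are illustrative choices, not the linked repository's implementation.

```python
import torch
import torch.nn as nn

class MoDBlock(nn.Module):
    """Mixture-of-Depths routing sketch: a router scores tokens, only the
    top-k per sequence pass through the wrapped (expensive) block, and the
    rest flow through unchanged. Hypothetical minimal version."""

    def __init__(self, d_model: int, block: nn.Module, capacity: float = 0.5):
        super().__init__()
        self.router = nn.Linear(d_model, 1)
        self.block = block
        self.capacity = capacity  # fraction of tokens processed per sequence

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model)
        scores = self.router(x).squeeze(-1)               # (batch, seq)
        k = max(1, int(self.capacity * x.size(1)))
        topk = scores.topk(k, dim=1).indices              # (batch, k)
        idx = topk.unsqueeze(-1).expand(-1, -1, x.size(-1))
        chosen = x.gather(1, idx)                         # (batch, k, d_model)
        # Scale block output by the router score so routing stays differentiable.
        gate = torch.sigmoid(scores.gather(1, topk)).unsqueeze(-1)
        out = x.clone()
        out.scatter_(1, idx, chosen + gate * self.block(chosen))
        return out

ffn = nn.Sequential(nn.Linear(64, 256), nn.GELU(), nn.Linear(256, 64))
mod = MoDBlock(d_model=64, block=ffn, capacity=0.5)
y = mod(torch.randn(2, 16, 64))  # only half the tokens go through ffn
```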