RAIVNLab / MatFormer-OLMo
Code repository for the public reproduction of the language modelling experiments on "MatFormer: Nested Transformer for Elastic Inference"
☆18 · Updated last year
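For context, the MatFormer architecture this repo reproduces nests smaller feed-forward (FFN) blocks inside a single trained Transformer, so sub-models of different widths can be sliced out of one set of weights for elastic inference. The sketch below illustrates that prefix-nested FFN slicing in PyTorch; it is based on the paper's description, not on this repository's code, and the module name `NestedFFN`, the dimensions, and the granularity choices are illustrative assumptions.

```python
# Illustrative sketch of MatFormer-style nested FFN slicing (not this repo's code).
# Assumption: sub-models use a prefix of the FFN hidden units, as described in the paper.
import torch
import torch.nn as nn


class NestedFFN(nn.Module):
    """One FFN block whose hidden units form prefix-nested sub-FFNs sharing weights."""

    def __init__(self, d_model: int = 512, d_ff: int = 2048,
                 granularities=(256, 512, 1024, 2048)):
        super().__init__()
        self.w_in = nn.Linear(d_model, d_ff)    # full-width weights, trained jointly
        self.w_out = nn.Linear(d_ff, d_model)
        self.granularities = granularities      # hidden widths of the nested sub-FFNs

    def forward(self, x: torch.Tensor, g: int) -> torch.Tensor:
        """Run the sub-FFN that uses only the first `m` hidden units."""
        m = self.granularities[g]
        # Slice a prefix of the input projection: (m, d_model) rows, (m,) bias.
        h = torch.relu(x @ self.w_in.weight[:m].T + self.w_in.bias[:m])
        # Slice the matching prefix of the output projection's columns.
        return h @ self.w_out.weight[:, :m].T + self.w_out.bias


ffn = NestedFFN()
x = torch.randn(2, 16, 512)                     # (batch, seq, d_model)
small, full = ffn(x, g=0), ffn(x, g=3)          # same weights, different capacity
print(small.shape, full.shape)                  # torch.Size([2, 16, 512]) twice
```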
Alternatives and similar repositories for MatFormer-OLMo:
Users interested in MatFormer-OLMo are comparing it to the repositories listed below.
- Using FlexAttention to compute attention with different masking patterns ☆42 · Updated 6 months ago
- This repo is based on https://github.com/jiaweizzhao/GaLore ☆26 · Updated 6 months ago
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆40 · Updated last year
- Latest Weight Averaging (NeurIPS HITY 2022) ☆29 · Updated last year
- [ICLR 2025] Official PyTorch implementation of "Mix-LN: Unleashing the Power of Deeper Layers by Combining Pre-LN and Post-LN" by Pengxia… ☆19 · Updated 3 months ago
- ☆25 · Updated last year
- Code for the paper "LASeR: Learning to Adaptively Select Reward Models with Multi-Arm Bandits" ☆13 · Updated 5 months ago
- Official implementation of the ICML 2024 paper RoSA (Robust Adaptation) ☆39 · Updated last year
- [Oral; NeurIPS OPT 2024] μLO: Compute-Efficient Meta-Generalization of Learned Optimizers ☆12 · Updated last week
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models ☆30 · Updated 9 months ago
- Official code for the paper "Attention as a Hypernetwork" ☆25 · Updated 9 months ago
- ☆15 · Updated last year
- Beyond KV Caching: Shared Attention for Efficient LLMs ☆16 · Updated 8 months ago
- This repository contains code for the MicroAdam paper. ☆17 · Updated 3 months ago
- ACL 2023 ☆39 · Updated last year
- Experimental scripts for researching data-adaptive learning rate scheduling. ☆23 · Updated last year
- Official implementation of the paper "A deeper look at depth pruning of LLMs" ☆14 · Updated 8 months ago
- 32 times longer context window than vanilla Transformers and up to 4 times longer than memory-efficient Transformers. ☆46 · Updated last year
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆36 · Updated last year
- ☆74 · Updated 7 months ago
- Implementation of Hyena Hierarchy in JAX ☆10 · Updated last year
- Code for T-MARS data filtering ☆35 · Updated last year
- The open-source materials for the paper "Sparsing Law: Towards Large Language Models with Greater Activation Sparsity" ☆18 · Updated 4 months ago
- Code for "Merging Text Transformers from Different Initializations"☆19Updated last month
- Aioli: A unified optimization framework for language model data mixing☆22Updated 2 months ago
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023)☆79Updated last year
- HGRN2: Gated Linear RNNs with State Expansion☆53Updated 7 months ago
- Official implementation of ECCV24 paper: POA☆24Updated 7 months ago
- Implementation of the model: "Reka Core, Flash, and Edge: A Series of Powerful Multimodal Language Models" in PyTorch☆30Updated last week
- ☆52Updated 8 months ago