thepowerfuldeez / OLMo
My fork of Allen AI's OLMo for educational purposes.
☆30 · Updated 6 months ago
Alternatives and similar repositories for OLMo
Users interested in OLMo are comparing it to the libraries listed below.
- Repo hosting code and materials related to speeding up LLM inference using token merging. ☆36 · Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆144 · Updated 9 months ago
- Verifiers for LLM Reinforcement Learning ☆61 · Updated 2 months ago
- A repository for research on medium-sized language models. ☆77 · Updated last year
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆98 · Updated 8 months ago
- ☆47 · Updated 10 months ago
- ☆51 · Updated 7 months ago
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆117 · Updated last year
- Work in progress. ☆69 · Updated 3 weeks ago
- ☆126 · Updated last year
- ☆51 · Updated 7 months ago
- ☆35 · Updated last year
- Repository for the Q-Filters method (https://arxiv.org/pdf/2503.02812) ☆33 · Updated 3 months ago
- Unofficial implementation for the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆163 · Updated last year
- This is the official repository for Inheritune. ☆111 · Updated 4 months ago
- Implementation of the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in PyTorch ☆55 · Updated this week
- ☆79 · Updated 10 months ago
- Collection of autoregressive model implementations ☆85 · Updated 2 months ago
- Official implementation of the ICML 2024 paper RoSA (Robust Adaptation) ☆42 · Updated last year
- ☆80 · Updated 5 months ago
- The evaluation framework for training-free sparse attention in LLMs ☆79 · Updated last week
- The official code repo and data hub of the top_nsigma sampling strategy for LLMs. ☆26 · Updated 4 months ago
- RWKV-7: Surpassing GPT ☆92 · Updated 7 months ago
- My implementation of "Q-Sparse: All Large Language Models can be Fully Sparsely-Activated" ☆32 · Updated 10 months ago
- EvaByte: Efficient Byte-level Language Models at Scale ☆102 · Updated 2 months ago
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS … ☆59 · Updated 8 months ago
- ☆198 · Updated 6 months ago
- Set of scripts to finetune LLMs ☆37 · Updated last year
- ☆26 · Updated 5 months ago
- ☆115 · Updated 4 months ago