Official repo of the paper LM2
☆47 · Updated Feb 13, 2025
Alternatives and similar repositories for lm2
Users interested in lm2 are comparing it to the repositories listed below.
- Everything you need to reproduce "Better plain ViT baselines for ImageNet-1k" in PyTorch, and more (☆12, updated Feb 16, 2026)
- ☆15, updated Mar 2, 2025
- Combining SOAP and MUON (☆19, updated Feb 11, 2025)
- Official code repository for the paper "Key-value memory in the brain" (☆31, updated Feb 25, 2025)
- Control LLM generation format efficiently. A simple version of microsoft/aici in vllm and transformers (☆14, updated Jun 7, 2024)
- ☆13, updated Jul 5, 2024
- [ICLR 2025 & COLM 2025] Official PyTorch implementation of the Forgetting Transformer and Adaptive Computation Pruning (☆140, updated Feb 25, 2026)
- The official implementation of the ICML 2024 paper "MemoryLLM: Towards Self-Updatable Large Language Models" and "M+: Extending MemoryLLM… (☆295, updated Jul 28, 2025)
- Research without Re-search: Maximal Update Parametrization Yields Accurate Loss Prediction across Scales (☆32, updated Jul 17, 2023)
- Code associated with the paper "Few-Shot Self-Rationalization with Natural Language Prompts" (☆13, updated Apr 27, 2022)
- ☆18, updated Mar 11, 2025
- ☆35, updated Dec 12, 2023
- ☆24, updated May 23, 2025
- PyTorch implementation of StableMask (ICML'24) (☆15, updated Jun 27, 2024)
- Online Preference Alignment for Language Models via Count-based Exploration (☆17, updated Jan 14, 2025)
- An unofficial implementation of "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" (☆36, updated Jun 7, 2024)
- Code for the paper "SirLLM: Streaming Infinite Retentive LLM" (☆60, updated May 28, 2024)
- ☆129, updated Jun 6, 2025
- ☆20, updated May 30, 2024
- CycleQD is a framework for parameter-space model merging (☆48, updated Feb 1, 2025)
- Code for the EMNLP 2024 paper "A simple and effective L2 norm based method for KV Cache compression" (☆18, updated Dec 13, 2024)
- Efficient PScan implementation in PyTorch (☆17, updated Jan 2, 2024)
- Official code repo for the paper "Great Memory, Shallow Reasoning: Limits of kNN-LMs" (☆23, updated Apr 30, 2025)
- Code for "AttentionPredictor: Temporal Pattern Matters for Efficient LLM Inference", Qingyue Yang, Jie Wang, Xing Li, Zhihai Wang, Ch… (☆28, updated Jul 15, 2025)
- AdaSplash: Adaptive Sparse Flash Attention (aka Flash Entmax Attention) (☆33, updated Sep 30, 2025)
- Linear Attention Sequence Parallelism (LASP) (☆89, updated Jun 4, 2024)
- ☆46, updated May 20, 2025
- ☆122, updated Feb 4, 2026
- [EMNLP 2024 Findings] Unlocking Continual Learning Abilities in Language Models (☆26, updated Oct 8, 2024)
- Official repo for "Error-Free Linear Attention is a Free Lunch: Exact Solution from Continuous-Time Dynamics" (☆71, updated Jan 13, 2026)
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" (☆27, updated Apr 17, 2024)
- PyTorch implementation of Titans (☆34, updated Jan 20, 2025)
- Official implementation of "MetaLA: Unified Optimal Linear Approximation to Softmax Attention Map" (NeurIPS 2024 Oral) (☆34, updated Jan 18, 2025)
- The open-source code of MetaStone-S1 (☆106, updated Aug 1, 2025)
- [ACL 2025] We propose a novel fine-tuning method, Separate Memory and Reasoning, which combines prompt tuning with LoRA (☆85, updated Nov 2, 2025)
- ☆48, updated Aug 12, 2025
- ☆39, updated May 20, 2025
- [TMLR 2025] When Attention Collapses: How Degenerate Layers in LLMs Enable Smaller, Stronger Models