Official repo of paper LM2
☆47 · Updated Feb 13, 2025
Alternatives and similar repositories for lm2
Users interested in lm2 are comparing it to the libraries listed below.
- Adaptation of titans-pytorch to llama models on HF (☆26, updated Mar 6, 2025)
- Everything you need to reproduce "Better plain ViT baselines for ImageNet-1k" in PyTorch, and more (☆12, updated Feb 16, 2026)
- [COLM 2025: 1st Workshop on the Application of LLM Explainability to Reasoning and Planning] Latent Chain-of-Thought? Decoding the Depth-… (☆17, updated Oct 4, 2025)
- ☆15, updated Mar 2, 2025
- Official repo of "QuickLLaMA: Query-aware Inference Acceleration for Large Language Models" (☆55, updated Jul 16, 2024)
- Official code repository for the paper "Key-value memory in the brain" (☆31, updated Feb 25, 2025)
- Control LLM generation format efficiently; a simple version of microsoft/aici in vllm and transformers (☆14, updated Jun 7, 2024)
- ☆13, updated Jul 5, 2024
- Official implementation of the ICML 2024 paper "MemoryLLM: Towards Self-Updatable Large Language Models" and "M+: Extending MemoryLLM… (☆295, updated Jul 28, 2025)
- Research without Re-search: Maximal Update Parametrization Yields Accurate Loss Prediction across Scales (☆32, updated Jul 17, 2023)
- ☆18, updated Mar 11, 2025
- ☆35, updated Dec 12, 2023
- ☆24, updated May 23, 2025
- Online Preference Alignment for Language Models via Count-based Exploration (☆17, updated Jan 14, 2025)
- PyTorch implementation of StableMask (ICML'24) (☆15, updated Jun 27, 2024)
- Unofficial implementation of "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" (☆36, updated Jun 7, 2024)
- Code for the paper "SirLLM: Streaming Infinite Retentive LLM" (☆60, updated May 28, 2024)
- ☆129, updated Jun 6, 2025
- Code for the EMNLP 2024 paper "A simple and effective L2 norm-based method for KV cache compression" (☆18, updated Dec 13, 2024)
- Efficient PScan implementation in PyTorch (☆17, updated Jan 2, 2024)
- Official code repo for the paper "Great Memory, Shallow Reasoning: Limits of kNN-LMs" (☆23, updated Apr 30, 2025)
- Code for the paper "Function-Space Learning Rates" (☆25, updated Jun 3, 2025)
- Code for "AttentionPredictor: Temporal Pattern Matters for Efficient LLM Inference", Qingyue Yang, Jie Wang, Xing Li, Zhihai Wang, Ch… (☆28, updated Jul 15, 2025)
- AdaSplash: Adaptive Sparse Flash Attention (aka Flash Entmax Attention) (☆33, updated Sep 30, 2025)
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… (☆372, updated Dec 12, 2024)
- Linear Attention Sequence Parallelism (LASP) (☆88, updated Jun 4, 2024)
- ☆46, updated May 20, 2025
- ☆122, updated Feb 4, 2026
- [EMNLP 2024 Findings] Unlocking Continual Learning Abilities in Language Models (☆26, updated Oct 8, 2024)
- Models and tools for "What makes ImageNet good for Transfer Learning?" (☆25, updated Jan 23, 2018)
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" (☆27, updated Apr 17, 2024)
- Official implementation of "MetaLA: Unified Optimal Linear Approximation to Softmax Attention Map" (NeurIPS 2024 Oral) (☆34, updated Jan 18, 2025)
- The open-source code of MetaStone-S1 (☆106, updated Aug 1, 2025)
- ☆39, updated May 20, 2025
- ☆48, updated Aug 12, 2025
- [TMLR 2025] When Attention Collapses: How Degenerate Layers in LLMs Enable Smaller, Stronger Models (☆125, updated Feb 15, 2026)
- ☆128, updated Mar 31, 2024
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models (☆35, updated Jun 12, 2024)
- Code for ICML 2024 paper (☆35, updated Sep 18, 2025)