GSYfate / knnlm-limits
Official code repo for the paper "Great Memory, Shallow Reasoning: Limits of kNN-LMs"
☆24 · Updated 6 months ago
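For context, a kNN-LM (the method whose limits the paper studies) interpolates a base language model's next-token distribution with a distribution induced by nearest-neighbor retrieval from a datastore of (context representation, next token) pairs. Below is a minimal numpy sketch of that interpolation; the function name, datastore layout, and the `k`/`lam` defaults are illustrative assumptions, not code from this repository.

```python
# Minimal sketch of kNN-LM interpolation (Khandelwal et al., 2020).
# All names and values here are illustrative, not taken from knnlm-limits.
import numpy as np

def knn_lm_probs(p_lm, query, keys, values, vocab_size, k=4, lam=0.25):
    """Interpolate a base LM distribution with a kNN distribution.

    p_lm:   (vocab_size,) base LM next-token probabilities
    query:  (d,) hidden representation of the current context
    keys:   (N, d) stored context representations
    values: (N,) next-token ids paired with each key
    """
    # Retrieve the k nearest datastore entries by L2 distance.
    dists = np.linalg.norm(keys - query, axis=1)
    nn = np.argsort(dists)[:k]

    # Softmax over negative distances, aggregated per target token.
    weights = np.exp(-dists[nn])
    weights /= weights.sum()
    p_knn = np.zeros(vocab_size)
    np.add.at(p_knn, values[nn], weights)

    # Final distribution: lam * kNN + (1 - lam) * base LM.
    return lam * p_knn + (1 - lam) * p_lm

if __name__ == "__main__":
    # Toy usage with a random datastore, just to show the shapes.
    rng = np.random.default_rng(0)
    V, N, d = 100, 1000, 16
    p_lm = rng.random(V); p_lm /= p_lm.sum()
    keys = rng.normal(size=(N, d))
    values = rng.integers(0, V, size=N)
    p = knn_lm_probs(p_lm, rng.normal(size=d), keys, values, V)
    assert np.isclose(p.sum(), 1.0)
```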
Alternatives and similar repositories for knnlm-limits
Users interested in knnlm-limits are comparing it to the repositories listed below.
- Long Context Extension and Generalization in LLMs ☆62 · Updated last year
- Codebase for Context-aware Meta-learned Loss Scaling (CaMeLS). https://arxiv.org/abs/2305.15076. ☆25 · Updated last year
- [ACL'24 Oral] Analysing The Impact of Sequence Composition on Language Model Pre-Training ☆22 · Updated last year
- Skill-It! A Data-Driven Skills Framework for Understanding and Training Language Models ☆47 · Updated 2 years ago
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆44 · Updated last month
- Exploration of automated dataset selection approaches at large scales. ☆48 · Updated 8 months ago
- ☆39 · Updated last year
- The repository contains code for Adaptive Data Optimization ☆28 · Updated 11 months ago
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆60 · Updated last year
- ☆53 · Updated last year
- ☆51 · Updated last year
- ☆76 · Updated last year
- ☆57 · Updated last year
- Gemstones: A Model Suite for Multi-Faceted Scaling Laws (NeurIPS 2025) ☆29 · Updated last month
- Repo accompanying our paper "Do Llamas Work in English? On the Latent Language of Multilingual Transformers". ☆80 · Updated last year
- Repo for the paper "Large Language Models Struggle to Learn Long-Tail Knowledge" ☆78 · Updated 2 years ago
- ☆32 · Updated last year
- ☆35 · Updated last year
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆85 · Updated last year
- Codebase for Instruction Following without Instruction Tuning ☆36 · Updated last year
- ☆52 · Updated last year
- Is In-Context Learning Sufficient for Instruction Following in LLMs? [ICLR 2025] ☆31 · Updated 9 months ago
- ☆23 · Updated 3 weeks ago
- Language models scale reliably with over-training and on downstream tasks ☆100 · Updated last year
- ☆20 · Updated 2 weeks ago
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆55 · Updated 2 years ago
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆27 · Updated last year
- Replicating O1 inference-time scaling laws ☆90 · Updated 11 months ago
- Pile Deduplication Code ☆19 · Updated 2 years ago
- ☆75 · Updated last year