GSYfate / knnlm-limits
Official code repo for paper "Great Memory, Shallow Reasoning: Limits of kNN-LMs"
☆23 · Updated 2 weeks ago
Alternatives and similar repositories for knnlm-limits
Users interested in knnlm-limits are comparing it to the repositories listed below.
- Is In-Context Learning Sufficient for Instruction Following in LLMs? [ICLR 2025] ☆30 · Updated 3 months ago
- Codebase for Context-aware Meta-learned Loss Scaling (CaMeLS). https://arxiv.org/abs/2305.15076 ☆25 · Updated last year
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆44 · Updated last year
- Long Context Extension and Generalization in LLMs ☆55 · Updated 7 months ago
- ☆24 · Updated 3 months ago
- This repo is based on https://github.com/jiaweizzhao/GaLore ☆27 · Updated 7 months ago
- The repository contains code for Adaptive Data Optimization ☆24 · Updated 5 months ago
- Exploration of automated dataset selection approaches at large scales. ☆40 · Updated 2 months ago
- ☆38 · Updated last year
- ☆31 · Updated last year
- [ACL'24 Oral] Analysing The Impact of Sequence Composition on Language Model Pre-Training ☆21 · Updated 8 months ago
- Aioli: A unified optimization framework for language model data mixing ☆25 · Updated 3 months ago
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling ☆29 · Updated last month
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆54 · Updated last year
- ☆50 · Updated last year
- ☆45 · Updated last year
- Codebase for Instruction Following without Instruction Tuning ☆34 · Updated 7 months ago
- ☆19 · Updated 10 months ago
- Few-shot Learning with Auxiliary Data ☆27 · Updated last year
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆27 · Updated last year
- https://footprints.baulab.info ☆17 · Updated 7 months ago
- ☆52 · Updated 11 months ago
- ☆39 · Updated 2 years ago
- ☆20 · Updated 11 months ago
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆80 · Updated last year
- Efficient Scaling laws and collaborative pretraining. ☆16 · Updated 3 months ago
- FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions ☆44 · Updated 10 months ago
- ☆28 · Updated 10 months ago
- Implementation of the paper: "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in pyTO… ☆55 · Updated 3 weeks ago
- ☆44 · Updated 8 months ago