nightdessert / Retrieval_Head
Open-source code for the paper "Retrieval Head Mechanistically Explains Long-Context Factuality"
☆230 · Updated last year
Alternatives and similar repositories for Retrieval_Head
Users interested in Retrieval_Head are comparing it to the repositories listed below.
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" ☆245 · Updated 4 months ago
- The HELMET Benchmark ☆198 · Updated last month
- A simple toolkit for benchmarking LLMs on mathematical reasoning tasks. 🧮✨ ☆273 · Updated last year
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs ☆455 · Updated last year
- Repo of the paper "Free Process Rewards without Process Labels" ☆168 · Updated 10 months ago
- Official repository for the ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" ☆183 · Updated 8 months ago
- [NeurIPS'24] Official code for *🎯DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving* ☆120 · Updated last year
- Code accompanying the paper "Massive Activations in Large Language Models" ☆195 · Updated last year
- Code and data for "Long-context LLMs Struggle with Long In-context Learning" [TMLR 2025] ☆110 · Updated 11 months ago
- AnchorAttention: Improved attention for long-context LLM training ☆213 · Updated last year
- Code associated with "Tuning Language Models by Proxy" (Liu et al., 2024) ☆127 · Updated last year
- The repo for In-context Autoencoder ☆165 · Updated last year
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning ☆259 · Updated 8 months ago
- A curated list of awesome resources dedicated to scaling laws for LLMs ☆81 · Updated 2 years ago
- A simple unified framework for evaluating LLMs ☆258 · Updated 9 months ago
- [NeurIPS'24 Spotlight] Observational Scaling Laws ☆59 · Updated last year
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆147 · Updated last year
- [ICML 2024] Selecting High-Quality Data for Training Language Models ☆200 · Updated last month
- [ACL 2024] Long-Context Language Modeling with Parallel Encodings ☆168 · Updated last year
- [EMNLP 2023] Adapting Language Models to Compress Long Contexts ☆326 · Updated last year
- A Survey on Data Selection for Language Models ☆254 · Updated 9 months ago
- Function Vectors in Large Language Models (ICLR 2024) ☆190 · Updated 9 months ago
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens": https://arxiv.org/abs/2402.13718 ☆370 · Updated last year
- Model merging is a highly efficient approach for long-to-short reasoning ☆98 · Updated 3 months ago