socialfoundations / tttlm
Test-time training on nearest neighbors for large language models
☆46 · Updated last year
Alternatives and similar repositories for tttlm
Users interested in tttlm are comparing it to the libraries listed below.
- A Sober Look at Language Model Reasoning ☆85 · Updated 2 weeks ago
- Code for "Reasoning to Learn from Latent Thoughts" ☆121 · Updated 6 months ago
- ☆50 · Updated last year
- ☆51 · Updated last year
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆123 · Updated last year
- Code accompanying the paper "Massive Activations in Large Language Models" ☆184 · Updated last year
- ☆98 · Updated last year
- [NeurIPS'24 Spotlight] Observational Scaling Laws ☆57 · Updated last year
- Function Vectors in Large Language Models (ICLR 2024) ☆181 · Updated 6 months ago
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆131 · Updated 3 months ago
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆136 · Updated 4 months ago
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆103 · Updated 2 weeks ago
- ☆18 · Updated last year
- ☆29 · Updated last year
- Official code for "SEAL: Steerable Reasoning Calibration of Large Language Models for Free" ☆44 · Updated 6 months ago
- [COLM 2025] Code for the paper "Learning Adaptive Parallel Reasoning with Language Models" ☆132 · Updated 2 months ago
- AI Logging for Interpretability and Explainability 🔬 ☆129 · Updated last year
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers" ☆117 · Updated last year
- ☆77 · Updated 3 years ago
- [NeurIPS 2024 Spotlight] Code and data for the paper "Finding Transformer Circuits with Edge Pruning" ☆61 · Updated 2 months ago
- Code for the paper "Preserving Diversity in Supervised Fine-tuning of Large Language Models" ☆40 · Updated 5 months ago
- ☆55 · Updated 5 months ago
- ☆52 · Updated 6 months ago
- ☆74 · Updated 11 months ago
- Learning adapter weights from task descriptions ☆19 · Updated last year
- ☆45 · Updated last year
- Repo of the paper "Free Process Rewards without Process Labels" ☆164 · Updated 7 months ago
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity ☆82 · Updated 7 months ago
- Exploration of automated dataset selection approaches at large scales ☆47 · Updated 7 months ago
- Official implementation of "Bootstrapping Language Models via DPO Implicit Rewards" ☆44 · Updated 6 months ago