wesg52 / world-models
Extracting spatial and temporal world models from LLMs
☆257 · Updated 2 years ago
Alternatives and similar repositories for world-models
Users interested in world-models are comparing it to the repositories listed below.
- Emergent world representations: Exploring a sequence model trained on a synthetic task ☆197 · Updated 2 years ago
- Code for NeurIPS'24 paper 'Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization' ☆234 · Updated 5 months ago
- Sotopia: an Open-ended Social Learning Environment (ICLR 2024 spotlight) ☆272 · Updated last week
- [NeurIPS '23 Spotlight] Thought Cloning: Learning to Think while Acting by Imitating Human Thinking ☆267 · Updated last year
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) ☆198 · Updated last year
- ☆214 · Updated 2 years ago
- ☆129 · Updated last year
- ☆323 · Updated last year
- Functional Benchmarks and the Reasoning Gap ☆90 · Updated last year
- Public repository for "The Surprising Effectiveness of Test-Time Training for Abstract Reasoning" ☆341 · Updated 2 months ago
- Repository for the paper Stream of Search: Learning to Search in Language ☆152 · Updated 11 months ago
- ☆301 · Updated 2 years ago
- Tools for understanding how transformer predictions are built layer-by-layer ☆559 · Updated 5 months ago
- ☆139 · Updated last year
- Code for arXiv 2023: Improving Language Model Negotiation with Self-Play and In-Context Learning from AI Feedback ☆208 · Updated 2 years ago
- Website for hosting the Open Foundation Models Cheat Sheet. ☆269 · Updated 8 months ago
- A toolkit for describing model features and intervening on those features to steer behavior. ☆225 · Updated 3 weeks ago
- Meta-Learning for Compositionality (MLC) for modeling human behavior ☆145 · Updated last month
- ☆98 · Updated last year
- ☆135 · Updated last year
- A mechanistic approach for understanding and detecting factual errors of large language models. ☆49 · Updated last year
- Scaling Data-Constrained Language Models ☆343 · Updated 6 months ago
- Datasets from the paper "Towards Understanding Sycophancy in Language Models" ☆100 · Updated 2 years ago
- ☆283 · Updated last year
- The Prism Alignment Project ☆87 · Updated last year
- Interpretable text embeddings by asking LLMs yes/no questions (NeurIPS 2024) ☆46 · Updated last year
- Simple next-token-prediction for RLHF ☆227 · Updated 2 years ago
- Keeping language models honest by directly eliciting knowledge encoded in their activations. ☆216 · Updated 2 weeks ago
- ☆137 · Updated 2 years ago
- Open source interpretability artefacts for R1. ☆165 · Updated 8 months ago