likenneth / othello_world
Emergent world representations: Exploring a sequence model trained on a synthetic task
☆188 · Updated 2 years ago
Alternatives and similar repositories for othello_world
Users interested in othello_world are comparing it to the repositories listed below.
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper · ☆129 · Updated 2 years ago
- ☆184 · Updated last year
- ☆209 · Updated 2 years ago
- Keeping language models honest by directly eliciting knowledge encoded in their activations · ☆209 · Updated this week
- [NeurIPS 2023] Learning Transformer Programs · ☆163 · Updated last year
- ☆276 · Updated last year
- Unofficial re-implementation of "Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets" · ☆78 · Updated 3 years ago
- Python library which enables complex compositions of language models such as scratchpads, chain of thought, tool use, selection-inference… · ☆208 · Updated 2 months ago
- ☆122 · Updated last year
- Materials for the ConceptARC paper · ☆99 · Updated 9 months ago
- Official code from the paper "Offline RL for Natural Language Generation with Implicit Language Q Learning" · ☆209 · Updated 2 years ago
- ☆126 · Updated last year
- Erasing concepts from neural representations with provable guarantees · ☆232 · Updated 6 months ago
- ☆68 · Updated 2 years ago
- ☆237 · Updated 10 months ago
- A repository for transformer critique learning and generation · ☆90 · Updated last year
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research) · ☆211 · Updated 8 months ago
- ☆28 · Updated last year
- ☆103 · Updated 6 months ago
- ☆83 · Updated last year
- Sparse Autoencoder Training Library · ☆54 · Updated 3 months ago
- Understand and test language model architectures on synthetic tasks · ☆221 · Updated last month
- ☆139 · Updated last month
- A library for efficient patching and automatic circuit discovery · ☆76 · Updated last month
- ☆35 · Updated last year
- Neural Networks and the Chomsky Hierarchy · ☆208 · Updated last year
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) · ☆191 · Updated last year
- Mechanistic Interpretability for Transformer Models · ☆51 · Updated 3 years ago
- ☆105 · Updated 6 months ago
- Tools for understanding how transformer predictions are built layer by layer · ☆514 · Updated 2 weeks ago