brendenlake / MLC-ML
Applying Behaviorally-Informed Meta-Learning (BIML) to machine learning benchmarks
☆52 · Updated last year
Alternatives and similar repositories for MLC-ML
Users interested in MLC-ML are comparing it to the repositories listed below.
- Meta-Learning for Compositionality (MLC) for modeling human behavior ☆143 · Updated last year
- ☆70 · Updated 3 years ago
- Emergent world representations: Exploring a sequence model trained on a synthetic task ☆191 · Updated 2 years ago
- ☆211 · Updated 2 years ago
- Experiments and code to generate the GINC small-scale in-context learning dataset from "An Explanation for In-context Learning as Implici… ☆106 · Updated last year
- ☆30 · Updated 2 years ago
- Unofficial re-implementation of "Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets" ☆79 · Updated 3 years ago
- Neural Networks and the Chomsky Hierarchy ☆211 · Updated last year
- [NeurIPS 2023] Learning Transformer Programs ☆162 · Updated last year
- Code release for the "Broken Neural Scaling Laws" (BNSL) paper ☆59 · Updated 2 years ago
- Interpretable text embeddings by asking LLMs yes/no questions (NeurIPS 2024) ☆45 · Updated 11 months ago
- ☆83 · Updated 2 years ago
- Materials for the ConceptARC paper ☆105 · Updated last year
- ☆185 · Updated last year
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper ☆129 · Updated 3 years ago
- Extracting spatial and temporal world models from LLMs ☆257 · Updated 2 years ago
- ☆120 · Updated last year
- Benchmarks and analysis tools to evaluate the causal reasoning abilities of LLMs ☆131 · Updated last year
- ☆34 · Updated last year
- Official implementation of FIND (NeurIPS '23): Function Interpretation Benchmark and Automated Interpretability Agents ☆51 · Updated last year
- ☆139 · Updated 3 months ago
- Official implementation of the transformer (TF) architecture suggested in the paper "Looped Transformers as Programmable Computers… ☆27 · Updated 2 years ago
- Official code from the paper "Offline RL for Natural Language Generation with Implicit Language Q Learning" ☆210 · Updated 2 years ago
- Universal Neurons in GPT2 Language Models ☆30 · Updated last year
- ☆144 · Updated last year
- ☆128 · Updated last year
- ☆104 · Updated last year
- ☆241 · Updated last year
- Reasoning with Language Model is Planning with World Model ☆175 · Updated 2 years ago
- Accompanying code for "Transformer Feed-Forward Layers Are Key-Value Memories", by Mor Geva, Roei Schuster, Jonathan Berant, and Omer Le… ☆99 · Updated 4 years ago