google-deepmind / emergent_in_context_learning
☆84 · Updated 10 months ago
Alternatives and similar repositories for emergent_in_context_learning
Users interested in emergent_in_context_learning are comparing it to the libraries listed below.
- ☆54 · Updated 2 years ago
- ☆34 · Updated last year
- Code for LaMPP: Language Models as Probabilistic Priors for Perception and Action ☆37 · Updated 2 years ago
- Experiments and code to generate the GINC small-scale in-context learning dataset from "An Explanation for In-context Learning as Implici… ☆106 · Updated last year
- ☆85 · Updated last year
- Language modeling via stochastic processes. Oral @ ICLR 2022. ☆138 · Updated 2 years ago
- [ICLR 2023] Code for our paper "Selective Annotation Makes Language Models Better Few-Shot Learners" ☆108 · Updated last year
- DEMix Layers for Modular Language Modeling ☆53 · Updated 3 years ago
- ☆95 · Updated last year
- Distributional Generalization in NLP. A roadmap. ☆88 · Updated 2 years ago
- A Kernel-Based View of Language Model Fine-Tuning https://arxiv.org/abs/2210.05643 ☆75 · Updated last year
- Language models scale reliably with over-training and on downstream tasks ☆97 · Updated last year
- ☆54 · Updated 2 years ago
- ☆55 · Updated last month
- Code for most of the experiments in the paper Understanding the Effects of RLHF on LLM Generalisation and Diversity ☆43 · Updated last year
- The official repository for our paper "The Devil is in the Detail: Simple Tricks Improve Systematic Generalization of Transformers". We s… ☆67 · Updated 2 years ago
- The official repository for our paper "Are Neural Nets Modular? Inspecting Functional Modularity Through Differentiable Weight Masks". We… ☆46 · Updated last year
- ☆34 · Updated last year
- Code release for the "Broken Neural Scaling Laws" (BNSL) paper ☆59 · Updated last year
- Directional Preference Alignment ☆57 · Updated 9 months ago
- ☆95 · Updated 11 months ago
- ☆13 · Updated last month
- Online Adaptation of Language Models with a Memory of Amortized Contexts (NeurIPS 2024) ☆63 · Updated 10 months ago
- Source code for the paper "Prefix Language Models are Unified Modal Learners" ☆43 · Updated 2 years ago
- The official repository for our paper "The Neural Data Router: Adaptive Control Flow in Transformers Improves Systematic Generalization". ☆33 · Updated last week
- Self-Supervised Alignment with Mutual Information ☆19 · Updated last year
- The accompanying code for "Transformer Feed-Forward Layers Are Key-Value Memories". Mor Geva, Roei Schuster, Jonathan Berant, and Omer Le… ☆93 · Updated 3 years ago
- Teaching Models to Express Their Uncertainty in Words ☆39 · Updated 3 years ago
- ☆62 · Updated 2 years ago
- ☆34 · Updated 3 months ago