abietti / transformer-birth
☆19 · Updated last year
Alternatives and similar repositories for transformer-birth
Users interested in transformer-birth are comparing it to the repositories listed below.
- ☆33 · Updated last year
- A Kernel-Based View of Language Model Fine-Tuning (https://arxiv.org/abs/2210.05643) ☆78 · Updated 2 years ago
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆81 · Updated 2 years ago
- Why Do We Need Weight Decay in Modern Deep Learning? [NeurIPS 2024] ☆67 · Updated last year
- Code Release for "Broken Neural Scaling Laws" (BNSL) paper ☆59 · Updated last year
- ☆53 · Updated last year
- ☆31 · Updated last year
- Experiments on the impact of depth in transformers and SSMs. ☆34 · Updated 11 months ago
- ☆36 · Updated 3 years ago
- This repository includes code to reproduce the tables in "Loss Landscapes are All You Need: Neural Network Generalization Can Be Explaine…" ☆40 · Updated 2 years ago
- Sparse and discrete interpretability tool for neural networks ☆63 · Updated last year
- ☆70 · Updated 10 months ago
- ☆45 · Updated last year
- Source code for the paper "Riemannian Preconditioned LoRA for Fine-Tuning Foundation Models" ☆32 · Updated last year
- Universal Neurons in GPT2 Language Models ☆30 · Updated last year
- Omnigrok: Grokking Beyond Algorithmic Data ☆62 · Updated 2 years ago
- Official Code Repository for the paper "Key-value memory in the brain" ☆28 · Updated 7 months ago
- Experiments and code to generate the GINC small-scale in-context learning dataset from "An Explanation for In-context Learning as Implici…" ☆108 · Updated last year
- Curse-of-memory phenomenon of RNNs in sequence modelling ☆19 · Updated 5 months ago
- The official repository for our paper "The Neural Data Router: Adaptive Control Flow in Transformers Improves Systematic Generalization". ☆33 · Updated 3 months ago
- The accompanying code for "Transformer Feed-Forward Layers Are Key-Value Memories". Mor Geva, Roei Schuster, Jonathan Berant, and Omer Le… ☆97 · Updated 4 years ago (the key-value reading is sketched after this list)
- ☆47 · Updated 8 months ago
- Official repository for our paper "Transformers Learn Higher-Order Optimization Methods for In-Context Learning: A Study with Linear Mode…" ☆17 · Updated 10 months ago
- ☆62 · Updated 3 years ago
- [NeurIPS 2022] Your Transformer May Not be as Powerful as You Expect (official implementation) ☆32 · Updated 2 years ago
- ☆69 · Updated 2 years ago
- Code for the paper "Pretraining task diversity and the emergence of non-Bayesian in-context learning for regression" ☆23 · Updated 2 years ago
- Replicating and dissecting the git-re-basin project in one-click-replication Colabs ☆35 · Updated 3 years ago
- ☆106 · Updated 7 months ago
- A fusion of a linear layer and a cross-entropy loss, written for PyTorch in Triton (a chunked sketch of the idea appears after this list). ☆70 · Updated last year
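
The fused linear + cross-entropy item above is the most code-shaped entry here, so a short illustration may help. What follows is a plain-PyTorch sketch of the memory-saving idea only, not the repository's Triton kernel; the function name, the chunking-plus-checkpointing scheme, and the chunk size are assumptions made for the example.

```python
# Plain-PyTorch sketch of the memory-saving idea behind a fused
# linear + cross-entropy kernel (illustrative only; the listed repo
# implements this as a single Triton kernel instead).
import torch
import torch.nn.functional as F
from torch.utils.checkpoint import checkpoint

def chunked_linear_cross_entropy(hidden, weight, targets, chunk_size=1024):
    """hidden: [N, d], weight: [V, d], targets: [N] -> mean CE over N tokens."""
    def chunk_loss(h, t):
        # Logits for one chunk only: [chunk_size, V] instead of [N, V].
        return F.cross_entropy(h @ weight.t(), t, reduction="sum")

    total = hidden.new_zeros(())
    for s in range(0, hidden.shape[0], chunk_size):
        # Checkpointing recomputes the chunk's logits during backward
        # instead of storing them, so peak memory stays O(chunk_size * V).
        total = total + checkpoint(
            chunk_loss,
            hidden[s:s + chunk_size],
            targets[s:s + chunk_size],
            use_reentrant=False,
        )
    return total / hidden.shape[0]
```

A genuinely fused kernel goes further than this sketch: it folds the log-sum-exp reduction into the matrix-multiply loop so per-chunk logits never round-trip through global memory. The version above only captures the "never materialize the full [N, V] logits matrix" part.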
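
The Geva et al. item ("Transformer Feed-Forward Layers Are Key-Value Memories") also condenses well into a few lines. This is a minimal sketch of the paper's reading of a feed-forward block, with made-up parameter names and sizes: rows of the input projection act as keys scored against the hidden state, and columns of the output projection act as values mixed by those scores.

```python
# Minimal sketch of reading a transformer feed-forward block as a
# key-value memory (after Geva et al.); names and sizes are illustrative.
import torch

d_model, d_ff = 16, 64
W_in = torch.randn(d_ff, d_model)   # key matrix: one key per hidden unit
W_out = torch.randn(d_model, d_ff)  # value matrix: one value per hidden unit
x = torch.randn(d_model)            # a residual-stream vector

scores = torch.relu(W_in @ x)  # memory coefficients: how strongly each key matches x
out = W_out @ scores           # output = score-weighted sum of value vectors
```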