languini-kitchen / languini-kitchen
The official Languini Kitchen repository
☆14 · Updated 8 months ago
Alternatives and similar repositories for languini-kitchen:
Users interested in languini-kitchen are comparing it to the repositories listed below.
- ☆50 · Updated 3 months ago
- Code Release for "Broken Neural Scaling Laws" (BNSL) paper ☆57 · Updated last year
- ☆51 · Updated 8 months ago
- ☆32 · Updated last year
- ☆46 · Updated last year
- ☆37 · Updated last year
- Pytorch implementation of preconditioned stochastic gradient descent (Kron and affine preconditioner, low-rank approximation precondition… ☆154 · Updated last month
- Implementation of Gated State Spaces, from the paper "Long Range Language Modeling via Gated State Spaces", in Pytorch ☆97 · Updated last year
- ☆24 · Updated last year
- Replicating and dissecting the git-re-basin project in one-click-replication Colabs ☆36 · Updated 2 years ago
- Blog post ☆16 · Updated 11 months ago
- ☆30 · Updated 2 months ago
- A centralized place for deep thinking code and experiments ☆79 · Updated last year
- A MAD laboratory to improve AI architecture designs 🧪 ☆102 · Updated last month
- Implementation of GateLoop Transformer in Pytorch and Jax ☆87 · Updated 7 months ago
- Code accompanying our paper "Feature Learning in Infinite-Width Neural Networks" (https://arxiv.org/abs/2011.14522) ☆59 · Updated 3 years ago
- Why Do We Need Weight Decay in Modern Deep Learning? [NeurIPS 2024] ☆59 · Updated 4 months ago
- ☆80 · Updated 6 months ago
- Parallelizing non-linear sequential models over the sequence length ☆49 · Updated 2 weeks ago
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) ☆24 · Updated 7 months ago
- ☆59 · Updated 2 years ago
- Unofficial but Efficient Implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ☆82 · Updated last year
- Transformers with doubly stochastic attention ☆44 · Updated 2 years ago
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… ☆52 · Updated last year
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆79 · Updated last year
- ☆22 · Updated 3 years ago
- The official repository for our paper "The Devil is in the Detail: Simple Tricks Improve Systematic Generalization of Transformers". We s… ☆66 · Updated 2 years ago
- Code for "The Expressive Power of Low-Rank Adaptation". ☆19 · Updated 9 months ago
- ☆26 · Updated 11 months ago
- RWKV model implementation ☆37 · Updated last year