peerdavid / layerwise-batch-entropy
Layerwise Batch Entropy Regularization
☆22 · Updated 2 years ago
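The repository implements a per-layer batch-entropy regularizer for improving the trainability of deep networks. As rough orientation only, and not the repository's actual loss, the PyTorch sketch below shows the general idea: estimate each unit's differential entropy across the batch under a Gaussian assumption and penalize layers whose average entropy falls below a floor. The function names, the hinge form of the penalty, and the `floor` default are illustrative assumptions.

```python
import math
import torch

def batch_entropy(a: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Gaussian-approximation differential entropy of each unit across the batch."""
    var = a.float().var(dim=0) + eps                     # one variance per unit
    return 0.5 * torch.log(2 * math.pi * math.e * var)   # H = 0.5 * ln(2*pi*e*sigma^2)

def lbe_penalty(activations: list, floor: float = 0.5) -> torch.Tensor:
    """Hinge penalty on layers whose mean batch entropy drops below `floor` (sketch, not the repo's loss)."""
    per_layer = torch.stack([batch_entropy(a).mean() for a in activations])
    return torch.clamp(floor - per_layer, min=0.0).sum()

# Usage sketch: collect each layer's activations during the forward pass, then
#   loss = task_loss + lbe_weight * lbe_penalty(activations)
```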
Alternatives and similar repositories for layerwise-batch-entropy
Users interested in layerwise-batch-entropy are comparing it to the libraries listed below.
- ☆33 · Updated 2 years ago
- [ICML 2024] SIRFShampoo: Structured inverse- and root-free Shampoo in PyTorch (https://arxiv.org/abs/2402.03496) ☆14 · Updated 6 months ago
- A GPT, made only of MLPs, in Jax ☆58 · Updated 3 years ago
- AdaCat ☆49 · Updated 2 years ago
- Code for the ICLR 2021 paper "Anytime Sampling for Autoregressive Models via Ordered Autoencoding" ☆26 · Updated last year
- JAX implementation of "Learning to learn by gradient descent by gradient descent" ☆27 · Updated 7 months ago
- My explorations into editing the knowledge and memories of an attention network ☆34 · Updated 2 years ago
- ☆74 · Updated 2 years ago
- ☆29 · Updated 2 years ago
- Code for "Semi-Discrete Normalizing Flows through Differentiable Tessellation" ☆26 · Updated 2 years ago
- Another attempt at a long-context / efficient transformer by me ☆38 · Updated 3 years ago
- Sequence Modeling with Structured State Spaces ☆64 · Updated 2 years ago
- Implementation of "compositional attention" from MILA, a multi-head attention variant that is reframed as a two-step attention process wi… ☆50 · Updated 3 years ago
- Blog post ☆17 · Updated last year
- FID computation in Jax/Flax. ☆27 · Updated 10 months ago
- Implementation of Gated State Spaces, from the paper "Long Range Language Modeling via Gated State Spaces", in Pytorch ☆100 · Updated 2 years ago
- A simple Transformer where the softmax has been replaced with normalization ☆20 · Updated 4 years ago
- Official code for Long Expressive Memory (ICLR 2022, Spotlight) ☆69 · Updated 3 years ago
- MTAdam: Automatic Balancing of Multiple Training Loss Terms ☆36 · Updated 4 years ago
- Re-implementation of "Grokking: Generalization beyond overfitting on small algorithmic datasets" ☆38 · Updated 3 years ago
- A simple implementation of a deep linear Pytorch module ☆21 · Updated 4 years ago
- 👩 Pytorch and Jax code for the Madam optimiser. ☆51 · Updated 4 years ago
- ☆19 · Updated 3 years ago
- Latest Weight Averaging (NeurIPS HITY 2022) ☆30 · Updated last year
- Easy-to-use AdaHessian optimizer (PyTorch) ☆78 · Updated 4 years ago
- ☆37 · Updated 3 years ago
- ☆47 · Updated 2 years ago
- ☆9 · Updated last year
- Transformers with doubly stochastic attention ☆45 · Updated 2 years ago
- Implementation of a Transformer that Ponders, using the scheme from the PonderNet paper ☆81 · Updated 3 years ago