AllanYangZhou / universal_neural_functional
☆52 · Updated last year
Alternatives and similar repositories for universal_neural_functional
Users that are interested in universal_neural_functional are comparing it to the libraries listed below
- The Energy Transformer block, in JAX ☆59 · Updated last year
- Deep Networks Grok All the Time and Here is Why ☆37 · Updated last year
- A centralized place for deep thinking code and experiments ☆86 · Updated 2 years ago
- ☆57 · Updated 11 months ago
- ☆118 · Updated 3 months ago
- Implementation of PSGD optimizer in JAX ☆34 · Updated 8 months ago
- NF-Layers for constructing neural functionals. ☆88 · Updated last year
- Latent Program Network (from the "Searching Latent Program Spaces" paper) ☆96 · Updated 6 months ago
- ☆34 · Updated 9 months ago
- ☆53 · Updated last year
- This repository includes code to reproduce the tables in "Loss Landscapes are All You Need: Neural Network Generalization Can Be Explaine… ☆38 · Updated 2 years ago
- ☆233 · Updated 7 months ago
- A simple library for scaling up JAX programs ☆143 · Updated 10 months ago
- ☆69 · Updated last year
- Universal Neurons in GPT2 Language Models ☆30 · Updated last year
- ☆40 · Updated 3 years ago
- 🧱 Modula software package ☆237 · Updated 3 weeks ago
- Parallelizing non-linear sequential models over the sequence length ☆54 · Updated 2 months ago
- Scalable and Stable Parallelization of Nonlinear RNNs ☆22 · Updated 2 weeks ago
- Official repository for the paper "Can You Learn an Algorithm? Generalizing from Easy to Hard Problems with Recurrent Networks" ☆59 · Updated 3 years ago
- Unofficial but Efficient Implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ☆88 · Updated last year
- Code for "Meta Learning Backpropagation And Improving It" @ NeurIPS 2021 https://arxiv.org/abs/2012.14905 ☆33 · Updated 3 years ago
- Unofficial re-implementation of "Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets" ☆79 · Updated 3 years ago
- ICML 2022: Learning Iterative Reasoning through Energy Minimization ☆48 · Updated 2 years ago
- ☆34 · Updated last year
- Meta-learning inductive biases in the form of useful conserved quantities. ☆37 · Updated 2 years ago
- ☆32 · Updated 11 months ago
- We integrate discrete diffusion models with neurosymbolic predictors for scalable and calibrated learning and reasoning ☆40 · Updated 3 months ago
- 📄 Small Batch Size Training for Language Models ☆60 · Updated 2 weeks ago
- [ICLR'25] Artificial Kuramoto Oscillatory Neurons ☆101 · Updated last month