fattorib / Flax-ResNets
CIFAR10 ResNets implemented in JAX+Flax
☆12 · Updated 3 years ago
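Since the repository describes itself only as "CIFAR10 ResNets implemented in JAX+Flax", here is a minimal, hypothetical sketch of a basic residual block in Flax's linen API. The module name, layer choices, and defaults below are assumptions for illustration, not the repository's actual code.

```python
# Hypothetical sketch of a CIFAR-10 style basic residual block in Flax (linen API).
# Illustration under assumed conventions; not taken from fattorib/Flax-ResNets.
import flax.linen as nn


class BasicBlock(nn.Module):
    features: int
    strides: int = 1

    @nn.compact
    def __call__(self, x, train: bool = True):
        residual = x
        y = nn.Conv(self.features, (3, 3), strides=(self.strides, self.strides),
                    padding="SAME", use_bias=False)(x)
        y = nn.BatchNorm(use_running_average=not train)(y)
        y = nn.relu(y)
        y = nn.Conv(self.features, (3, 3), padding="SAME", use_bias=False)(y)
        y = nn.BatchNorm(use_running_average=not train)(y)
        # 1x1 projection on the shortcut when the spatial or channel shape changes.
        if residual.shape != y.shape:
            residual = nn.Conv(self.features, (1, 1),
                               strides=(self.strides, self.strides),
                               use_bias=False)(residual)
            residual = nn.BatchNorm(use_running_average=not train)(residual)
        return nn.relu(y + residual)
```

Note that when initializing or applying a module like this, Flax expects the BatchNorm statistics to be handled as a mutable "batch_stats" collection.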
Alternatives and similar repositories for Flax-ResNets
Users interested in Flax-ResNets are comparing it to the libraries listed below.
- Understanding the interplay between memorization and generalization in neural networks, featuring MAT, a learning algorithm to enhance ro… ☆39 · Updated 8 months ago
- ☆37 · Updated 3 weeks ago
- Collection of snippets for PyTorch users ☆25 · Updated 3 years ago
- Code accompanying our paper "Feature Learning in Infinite-Width Neural Networks" (https://arxiv.org/abs/2011.14522) ☆62 · Updated 4 years ago
- This repository contains the code of the distribution shift framework presented in A Fine-Grained Analysis on Distribution Shift (Wiles e… ☆83 · Updated 2 months ago
- Explores the ideas presented in Deep Ensembles: A Loss Landscape Perspective (https://arxiv.org/abs/1912.02757) by Stanislav Fort, Huiyi … ☆65 · Updated 5 years ago
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆81 · Updated 2 years ago
- ☆133 · Updated 4 years ago
- ☆70 · Updated 8 months ago
- ☆19 · Updated 3 years ago
- Towards Understanding Sharpness-Aware Minimization [ICML 2022] ☆35 · Updated 3 years ago
- Unofficial but Efficient Implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ☆87 · Updated last year
- Implementation of Infini-Transformer in Pytorch ☆111 · Updated 7 months ago
- Blog post ☆17 · Updated last year
- Recycling diverse models ☆45 · Updated 2 years ago
- ☆60 · Updated 3 years ago
- ☆166 · Updated 2 years ago
- ☆21 · Updated 2 years ago
- Framework code with wandb, checkpointing, logging, configs, experimental protocols. Useful for fine-tuning models or training from scratc… ☆151 · Updated 2 years ago
- nanoGPT-like codebase for LLM training ☆103 · Updated 3 months ago
- Code to implement the AND-mask and geometric mean to do gradient based optimization, from the paper "Learning explanations that are hard … ☆40 · Updated 4 years ago
- ☆51 · Updated last year
- ☆58 · Updated 2 years ago
- ☆23 · Updated 2 years ago
- Why Do We Need Weight Decay in Modern Deep Learning? [NeurIPS 2024] ☆67 · Updated 11 months ago
- ☆75 · Updated 2 years ago
- Language models scale reliably with over-training and on downstream tasks ☆98 · Updated last year
- ModelDiff: A Framework for Comparing Learning Algorithms ☆59 · Updated 2 years ago
- ☆108 · Updated 2 years ago
- Official code for the ICML 2024 paper "The Entropy Enigma: Success and Failure of Entropy Minimization" ☆53 · Updated last year