dpressel / mint
MinT: Minimal Transformer Library and Tutorials
☆254 · Updated 2 years ago
Alternatives and similar repositories for mint:
Users who are interested in mint are comparing it to the libraries listed below.
- A pure-functional implementation of a machine learning transformer model in Python/JAX ☆177 · Updated this week
- Module 0 - Fundamentals ☆102 · Updated 8 months ago
- A library to inspect and extract intermediate layers of PyTorch models. ☆473 · Updated 2 years ago
- All about the fundamentals and working of Diffusion Models ☆155 · Updated 2 years ago
- Named tensors with first-class dimensions for PyTorch ☆320 · Updated last year
- All about the fundamental blocks of TF and JAX! ☆274 · Updated 3 years ago
- Seminar on Large Language Models (COMP790-101 at UNC Chapel Hill, Fall 2022) ☆310 · Updated 2 years ago
- An interactive exploration of Transformer programming. ☆262 · Updated last year
- Host repository for the "Reproducible Deep Learning" PhD course ☆406 · Updated 3 years ago
- The "tl;dr" on a few notable transformer papers (pre-2022). ☆190 · Updated 2 years ago
- Check if you have training samples in your test set ☆64 · Updated 2 years ago
- Transformer Grammars: Augmenting Transformer Language Models with Syntactic Inductive Biases at Scale, TACL (2022) ☆125 · Updated 6 months ago
- 100 exercises to learn JAX ☆576 · Updated 2 years ago
- Resources from the EleutherAI Math Reading Group ☆53 · Updated 2 months ago
- FasterAI: Prune and Distill your models with FastAI and PyTorch ☆248 · Updated last month
- Convert scikit-learn models to PyTorch modules ☆162 · Updated 11 months ago
- An assignment for CMU CS11-711 Advanced NLP, building NLP systems from scratch ☆172 · Updated 2 years ago
- ☆430 · Updated 6 months ago
- Deep learning with PyTorch Lightning · Updated 6 months ago
- Functional deep learning ☆108 · Updated 2 years ago
- Live Python Notebooks with any Editor ☆279 · Updated 2 years ago
- A Pytree Module system for Deep Learning in JAX ☆214 · Updated 2 years ago
- Library for 8-bit optimizers and quantization routines. ☆716 · Updated 2 years ago
- Implicit MLE: Backpropagating Through Discrete Exponential Family Distributions ☆258 · Updated last year
- Annotations of the interesting ML papers I read ☆240 · Updated last week
- Implementation of Flash Attention in JAX ☆206 · Updated last year
- An alternative to convolution in neural networks ☆254 · Updated last year
- Unofficial JAX implementations of deep learning research papers ☆156 · Updated 2 years ago
- Enabling easy statistical significance testing for deep neural networks. ☆335 · Updated 10 months ago
- Repository containing code for the "How to Train BERT with an Academic Budget" paper ☆313 · Updated last year