dpressel / mint
MinT: Minimal Transformer Library and Tutorials
☆256 · Updated 2 years ago
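MinT bills itself as a minimal transformer library with accompanying tutorials. As a rough illustration of the kind of building block such a library centers on, here is a sketch of a single scaled dot-product self-attention layer in PyTorch. This is illustrative only, assuming a standard multi-head attention layout; the class name MinimalSelfAttention and its signature are hypothetical, not MinT's actual API.

```python
import math
import torch
import torch.nn as nn

class MinimalSelfAttention(nn.Module):
    """One self-attention block, the core unit of a minimal transformer.
    Hypothetical sketch for illustration; not MinT's actual API."""

    def __init__(self, d_model: int, num_heads: int):
        super().__init__()
        assert d_model % num_heads == 0, "d_model must divide evenly across heads"
        self.num_heads = num_heads
        self.d_head = d_model // num_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)  # fused Q/K/V projection
        self.out = nn.Linear(d_model, d_model)      # output projection

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, T, C = x.shape  # (batch, sequence length, d_model)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # Split each of Q, K, V into heads: (B, num_heads, T, d_head)
        q, k, v = (t.view(B, T, self.num_heads, self.d_head).transpose(1, 2)
                   for t in (q, k, v))
        # Attention weights: softmax(Q K^T / sqrt(d_head))
        att = (q @ k.transpose(-2, -1)) / math.sqrt(self.d_head)
        att = att.softmax(dim=-1)
        y = att @ v  # weighted sum of value vectors
        # Merge heads back to (B, T, d_model) and project
        return self.out(y.transpose(1, 2).reshape(B, T, C))
```

For example, `MinimalSelfAttention(64, 4)(torch.randn(2, 10, 64))` returns a tensor of the same (batch, sequence, d_model) shape, which is what lets these blocks be stacked into a full transformer.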
Alternatives and similar repositories for mint
Users interested in mint are comparing it to the libraries listed below.
- A pure-functional implementation of a machine learning transformer model in Python/JAX ☆179 · Updated last month
- Check if you have training samples in your test set ☆64 · Updated 3 years ago
- A library to inspect and extract intermediate layers of PyTorch models. ☆473 · Updated 3 years ago
- An interactive exploration of Transformer programming. ☆265 · Updated last year
- Named tensors with first-class dimensions for PyTorch ☆332 · Updated 2 years ago
- Lite Inference Toolkit (LIT) for PyTorch ☆161 · Updated 3 years ago
- The "tl;dr" on a few notable transformer papers (pre-2022). ☆190 · Updated 2 years ago
- Deep learning with PyTorch Lightning ☆1 · Updated 8 months ago
- Implementation of the Adan (ADAptive Nesterov momentum algorithm) optimizer in PyTorch ☆252 · Updated 2 years ago
- Highly commented implementations of Transformers in PyTorch ☆136 · Updated last year
- Module 0 - Fundamentals ☆104 · Updated 10 months ago
- Memory-mapped NumPy arrays of varying shapes ☆299 · Updated last year
- All about the fundamentals and workings of Diffusion Models ☆158 · Updated 2 years ago
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day ☆256 · Updated last year
- Unofficial JAX implementations of deep learning research papers ☆156 · Updated 3 years ago
- Seminar on Large Language Models (COMP790-101 at UNC Chapel Hill, Fall 2022) ☆310 · Updated 2 years ago
- Sequence modeling with Mega. ☆296 · Updated 2 years ago
- Outlining techniques for improving the training performance of your PyTorch model without compromising its accuracy ☆129 · Updated 2 years ago
- Language Modeling with the H3 State Space Model ☆519 · Updated last year
- Transformer Grammars: Augmenting Transformer Language Models with Syntactic Inductive Biases at Scale, TACL (2022) ☆127 · Updated 2 weeks ago
- An alternative to convolution in neural networks ☆256 · Updated last year
- Recipes are a standard, well supported set of blueprints for machine learning engineers to rapidly train models using the latest research… ☆322 · Updated this week
- Implementation of the specific Transformer architecture from "PaLM: Scaling Language Modeling with Pathways" in JAX (Equinox framework) ☆187 · Updated 3 years ago
- My implementation of DeepMind's Perceiver ☆63 · Updated 4 years ago
- Minimal standalone example of a diffusion model ☆159 · Updated 3 years ago
- Weakly Supervised End-to-End Learning (NeurIPS 2021) ☆157 · Updated 2 years ago
- Amos optimizer with the JEstimator library. ☆82 · Updated last year
- Implementation of Mega, the Single-head Attention with Multi-headed EMA architecture that currently holds SOTA on Long Range Arena ☆204 · Updated last year
- A case study of efficient training of large language models using commodity hardware. ☆68 · Updated 2 years ago
- ☆166 · Updated 2 years ago