tensorops / TransformerX
Flexible Python library providing building blocks (layers) for reproducible Transformers research (TensorFlow, PyTorch, and JAX)
⭐53 · Updated last year
Alternatives and similar repositories for TransformerX
Users interested in TransformerX are comparing it to the libraries listed below.
- Just some miscellaneous utility functions / decorators / modules related to Pytorch and Accelerate to help speed up implementation of new… ⭐123 · Updated last year
- ⭐134 · Updated 2 years ago
- This code repository contains the code used for my "Optimizing Memory Usage for Training LLMs and Vision Transformers in PyTorch" blog po… ⭐91 · Updated 2 years ago
- A Jax-based library for building transformers; includes implementations of GPT, Gemma, LlaMa, Mixtral, Whisper, SWin, ViT, and more. ⭐297 · Updated last year
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ⭐103 · Updated 11 months ago
- Outlining techniques for improving the training performance of your PyTorch model without compromising its accuracy ⭐129 · Updated 2 years ago
- Minimal example scripts of the Hugging Face Trainer, focused on staying under 150 lines ⭐195 · Updated last year
- Presents comprehensive benchmarks of XLA-compatible pre-trained models in Keras. ⭐37 · Updated 2 years ago
- Documented and unit-tested educational deep learning framework with autograd from scratch. ⭐122 · Updated last year
- Gradient Boosting Reinforcement Learning (GBRL) ⭐126 · Updated 2 weeks ago
- Training small GPT-2 style models using Kolmogorov-Arnold networks. ⭐121 · Updated last year
- MinT: Minimal Transformer Library and Tutorials ⭐259 · Updated 3 years ago
- Highly commented implementations of Transformers in PyTorch ⭐137 · Updated 2 years ago
- ⭐140 · Updated 3 weeks ago
- ⭐75 · Updated 3 years ago
- Deep learning library implemented from scratch in NumPy. Mixtral, Mamba, LLaMA, GPT, ResNet, and other experiments. ⭐53 · Updated last year
- A lightweight library designed to accelerate the process of training PyTorch models by providing a minimal, but extensible training loop … ⭐192 · Updated 5 months ago
- A miniature AI training framework for PyTorch ⭐42 · Updated 9 months ago
- ⭐82 · Updated last year
- Swarming algorithms like PSO, Ant Colony, Sakana, and more in PyTorch ⭐134 · Updated last month
- Various transformers for FSDP research ⭐38 · Updated 3 years ago
- ML/DL math and method notes ⭐64 · Updated last year
- Train fastai models faster (and other useful tools) ⭐72 · Updated 5 months ago
- HomebrewNLP in JAX flavour for maintainable TPU training ⭐51 · Updated last year
- My attempts at implementing various bits of Sepp Hochreiter's new xLSTM architecture ⭐133 · Updated last year
- Collection of PyTorch Lightning tutorials as rich scripts, automatically transformed into IPython notebooks. ⭐318 · Updated 3 months ago
- Unofficial JAX implementations of deep learning research papers ⭐159 · Updated 3 years ago
- ⭐150 · Updated last year
- Functional local implementations of main model parallelism approaches ⭐96 · Updated 2 years ago
- Cyclemoid implementation for PyTorch ⭐90 · Updated 3 years ago