tensorops / TransformerX
Flexible Python library providing building blocks (layers) for reproducible Transformers research (TensorFlow, PyTorch, and JAX)
☆53 · Updated 2 years ago
Alternatives and similar repositories for TransformerX
Users interested in TransformerX are comparing it to the libraries listed below.
- ☆133 · Updated 2 years ago
- Training small GPT-2 style models using Kolmogorov-Arnold networks. ☆122 · Updated last year
- https://slds-lmu.github.io/seminar_multimodal_dl/ ☆171 · Updated 3 years ago
- Outlining techniques for improving the training performance of your PyTorch model without compromising its accuracy. ☆129 · Updated 2 years ago
- Gradient Boosting Reinforcement Learning (GBRL). ☆135 · Updated 2 months ago
- This repository contains the code used for my "Optimizing Memory Usage for Training LLMs and Vision Transformers in PyTorch" blog po… ☆92 · Updated 2 years ago
- A JAX-based library for building transformers; includes implementations of GPT, Gemma, LLaMA, Mixtral, Whisper, Swin, ViT, and more. ☆298 · Updated last year
- Highly commented implementations of Transformers in PyTorch. ☆138 · Updated 2 years ago
- This repository contains a better implementation of Kolmogorov-Arnold networks. ☆63 · Updated 7 months ago
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients". ☆103 · Updated last year
- RAGs: Simple implementations of Retrieval-Augmented Generation (RAG) systems. ☆141 · Updated last year
- Documented and unit-tested educational deep learning framework with autograd from scratch. ☆122 · Updated last year
- Minimal example scripts of the Hugging Face Trainer, focused on staying under 150 lines. ☆196 · Updated last year
- Just some miscellaneous utility functions / decorators / modules related to PyTorch and Accelerate to help speed up implementation of new… ☆126 · Updated last year
- A lightweight library designed to accelerate the process of training PyTorch models by providing a minimal, but extensible, training loop… ☆193 · Updated 3 weeks ago
- Deep learning library implemented from scratch in NumPy. Mixtral, Mamba, LLaMA, GPT, ResNet, and other experiments. ☆53 · Updated last year
- MinT: Minimal Transformer Library and Tutorials. ☆260 · Updated 3 years ago
- ☆82 · Updated last year
- Implementation of the Llama architecture with RLHF + Q-learning. ☆170 · Updated 11 months ago
- Presents comprehensive benchmarks of XLA-compatible pre-trained models in Keras. ☆37 · Updated 2 years ago
- Collection of PyTorch Lightning tutorials in the form of rich scripts automatically transformed into IPython notebooks. ☆319 · Updated 5 months ago
- Swarming algorithms like PSO, Ant Colony, Sakana, and more in PyTorch. ☆136 · Updated last week
- ☆75 · Updated 3 years ago
- Visualising losses in deep neural networks. ☆16 · Updated last year
- LoRA and DoRA from-scratch implementations. ☆215 · Updated last year
- Automatic gradient descent. ☆217 · Updated 2 years ago
- Notebooks to demonstrate TimmWrapper. ☆16 · Updated last year
- Collection of tests performed during the study of the new Kolmogorov-Arnold Networks (KAN). ☆41 · Updated 11 months ago
- My attempts at implementing various bits of Sepp Hochreiter's new xLSTM architecture. ☆134 · Updated last year
- NYU Deep Learning, Fall 2022. ☆63 · Updated last year