arogozhnikov / adamw_bfloat16
AdamW optimizer for bfloat16 models in pytorch 🔥.
⭐39 · Updated last year
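The snippet below is a minimal, generic PyTorch sketch of the setting this optimizer targets: training a model whose weights are kept in bfloat16. It uses only the stock `torch.optim.AdamW` to illustrate the workflow; it is not this repository's API, and the layer sizes and hyperparameters are placeholders.

```python
import torch
from torch import nn

# Generic sketch (not adamw_bfloat16's own API): keep the model weights in
# bfloat16 to halve parameter memory, then optimize with stock AdamW.
# A bfloat16-aware AdamW, like the one in this repository, targets the
# precision loss that occurs when tiny updates are rounded away in bfloat16.
model = nn.Linear(512, 512).to(torch.bfloat16)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

x = torch.randn(8, 512, dtype=torch.bfloat16)
loss = model(x).float().pow(2).mean()  # cast to float32 before the reduction
loss.backward()
optimizer.step()
optimizer.zero_grad()
```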
Alternatives and similar repositories for adamw_bfloat16
Users interested in adamw_bfloat16 are comparing it to the libraries listed below.
- ⭐32 · Updated 2 years ago
- Implementation of Token Shift GPT - An autoregressive model that solely relies on shifting the sequence space for mixing · ⭐49 · Updated 3 years ago
- Engineering the state of RNN language models (Mamba, RWKV, etc.) · ⭐32 · Updated last year
- RWKV model implementation · ⭐38 · Updated 2 years ago
- Latent Diffusion Language Models · ⭐70 · Updated 2 years ago
- CUDA implementation of autoregressive linear attention, with all the latest research findings · ⭐46 · Updated 2 years ago
- MaskedTensors for PyTorch · ⭐38 · Updated 3 years ago
- High performance pytorch modules · ⭐18 · Updated 2 years ago
- AdaCat · ⭐49 · Updated 3 years ago
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 · ⭐49 · Updated 3 years ago
- ⭐31 · Updated this week
- A python library for highly configurable transformers - easing model architecture search and experimentation. · ⭐49 · Updated 4 years ago
- Implementation of the Kalman Filtering Attention proposed in "Kalman Filtering Attention for User Behavior Modeling in CTR Prediction" · ⭐59 · Updated 2 years ago
- My explorations into editing the knowledge and memories of an attention network · ⭐35 · Updated 3 years ago
- Source-to-Source Debuggable Derivatives in Pure Python · ⭐15 · Updated last year
- Standalone Product Key Memory module in Pytorch - for augmenting Transformer models · ⭐87 · Updated 2 months ago
- Unofficially Implements https://arxiv.org/abs/2112.05682 to get Linear Memory Cost on Attention for PyTorch · ⭐12 · Updated 3 years ago
- Triton Implementation of HyperAttention Algorithm · ⭐48 · Updated 2 years ago
- ⭐21 · Updated 2 years ago
- ⭐29 · Updated 3 years ago
- LayerNorm(SmallInit(Embedding)) in a Transformer to improve convergence · ⭐61 · Updated 3 years ago
- See https://github.com/cuda-mode/triton-index/ instead! · ⭐11 · Updated last year
- Local Attention - Flax module for Jax · ⭐22 · Updated 4 years ago
- Training and evaluation code for the paper "Headless Language Models: Learning without Predicting with Contrastive Weight Tying" (https:/… · ⭐28 · Updated last year
- Fine-Tuning Pre-trained Transformers into Decaying Fast Weights · ⭐19 · Updated 3 years ago
- A place to store reusable transformer components of my own creation or found on the interwebs · ⭐71 · Updated this week
- Implementation of GateLoop Transformer in Pytorch and Jax · ⭐91 · Updated last year
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… · ⭐53 · Updated 2 years ago
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8. · ⭐46 · Updated last year
- ⭐13 · Updated 3 weeks ago