aykutcayir34 / DifferentialTransformer
☆13 · Updated last year
Alternatives and similar repositories for DifferentialTransformer
Users interested in DifferentialTransformer are comparing it to the libraries listed below.
- Implementation of the proposed minGRU in PyTorch ☆316 · Updated last month
- PyTorch implementation of the xLSTM model by Beck et al. (2024) ☆181 · Updated last year
- Implementation of Griffin from the paper "Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models" ☆56 · Updated 2 months ago
- My attempts at implementing various bits of Sepp Hochreiter's new xLSTM architecture ☆134 · Updated last year
- Training small GPT-2 style models using Kolmogorov-Arnold networks ☆122 · Updated last year
- Several types of attention modules written in PyTorch for learning purposes ☆52 · Updated last week
- PyTorch implementation of Jamba from the paper "Jamba: A Hybrid Transformer-Mamba Language Model" ☆203 · Updated last week
- Simple, minimal implementation of the Mamba SSM in one PyTorch file, using logcumsumexp (Heinsen sequence) ☆128 · Updated last year
- PyTorch (Lightning) implementation of the Mamba model ☆34 · Updated 8 months ago
- Implementation of Infini-Transformer in PyTorch ☆112 · Updated last year
- Experiments on Multi-Head Latent Attention ☆99 · Updated last year
- A simple but robust PyTorch implementation of RetNet from "Retentive Network: A Successor to Transformer for Large Language Models" (http… ☆106 · Updated 2 years ago
- ☆293 · Updated last year
- Implementation of xLSTM in PyTorch from the paper "xLSTM: Extended Long Short-Term Memory" ☆118 · Updated 2 months ago
- Miscellaneous utility functions / decorators / modules related to PyTorch and Accelerate to help speed up implementation of new… ☆126 · Updated last year
- Implementation of Block Recurrent Transformer in PyTorch ☆223 · Updated last year
- Gradient Boosting Reinforcement Learning (GBRL) ☆131 · Updated 2 months ago
- PyTorch implementation of the sparse attention from the paper "Generating Long Sequences with Sparse Transformers" ☆93 · Updated 2 months ago
- ☆158 · Updated 2 months ago
- FlashRNN - Fast RNN Kernels with I/O Awareness ☆174 · Updated 2 months ago
- Fine-tuning the Llama3-8B LLM in a multi-GPU environment using DeepSpeed ☆18 · Updated last year
- Resources about xLSTM by Sepp Hochreiter ☆318 · Updated last year
- LoRA and DoRA from Scratch Implementations ☆215 · Updated last year
- Implementation of the proposed Adam-atan2 from Google DeepMind in PyTorch ☆134 · Updated 2 months ago
- Integrating Mamba/SSMs with Transformer for Enhanced Long Context and High-Quality Sequence Modeling ☆211 · Updated 2 months ago
- Annotated version of the Mamba paper ☆493 · Updated last year
- ☆45 · Updated 7 months ago
- Code for Adam-mini: Use Fewer Learning Rates To Gain More (https://arxiv.org/abs/2406.16793) ☆448 · Updated 7 months ago
- Tiled Flash Linear Attention library for fast and efficient mLSTM kernels ☆81 · Updated last month
- Prune transformer layers ☆74 · Updated last year