aykutcayir34 / DifferentialTransformer
☆13 · Updated 10 months ago
Alternatives and similar repositories for DifferentialTransformer
Users interested in DifferentialTransformer are comparing it to the libraries listed below.
- Implementation of Griffin from the paper: "Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models" ☆56 · Updated 3 weeks ago
- several types of attention modules written in PyTorch for learning purposes ☆53 · Updated 10 months ago
- Implementation of the proposed minGRU in Pytorch ☆300 · Updated 5 months ago
- Pytorch implementation of the xLSTM model by Beck et al. (2024) ☆169 · Updated last year
- my attempts at implementing various bits of Sepp Hochreiter's new xLSTM architecture ☆131 · Updated last year
- Training small GPT-2 style models using Kolmogorov-Arnold networks. ☆121 · Updated last year
- ☆292 · Updated 7 months ago
- Implementation of Infini-Transformer in Pytorch ☆111 · Updated 7 months ago
- Pytorch (Lightning) implementation of the Mamba model ☆29 · Updated 3 months ago
- A simple but robust PyTorch implementation of RetNet from "Retentive Network: A Successor to Transformer for Large Language Models" (http… ☆105 · Updated last year
- Just some miscellaneous utility functions / decorators / modules related to Pytorch and Accelerate to help speed up implementation of new… ☆124 · Updated last year
- Attempt to make multiple residual streams from Bytedance's Hyper-Connections paper accessible to the public ☆88 · Updated last month
- Kolmogorov-Arnold Networks (KAN) using Chebyshev polynomials instead of B-splines. ☆384 · Updated last year
- Implementation of Agent Attention in Pytorch ☆91 · Updated last year
- Huggingface compatible implementation of RetNet (Retentive Networks, https://arxiv.org/pdf/2307.08621.pdf) including parallel, recurrent,… ☆226 · Updated last year
- Distributed training (multi-node) of a Transformer model ☆76 · Updated last year
- Randomized Positional Encodings Boost Length Generalization of Transformers ☆82 · Updated last year
- Experiments on Multi-Head Latent Attention ☆94 · Updated 11 months ago
- Tiled Flash Linear Attention library for fast and efficient mLSTM Kernels. ☆68 · Updated 2 weeks ago
- ☆37 · Updated last year
- LoRA and DoRA from Scratch Implementations ☆209 · Updated last year
- PyTorch Implementation of Jamba: "Jamba: A Hybrid Transformer-Mamba Language Model" ☆182 · Updated last week
- RL significantly improves the reasoning capability of Qwen2.5-1.5B-Instruct ☆29 · Updated 5 months ago
- An easy to use PyTorch implementation of the Kolmogorov Arnold Network and a few novel variations ☆184 · Updated 8 months ago
- Tutorial for how to build BERT from scratch ☆97 · Updated last year
- Implementation of the proposed Adam-atan2 from Google Deepmind in Pytorch ☆115 · Updated 8 months ago
- nanoGRPO is a lightweight implementation of Group Relative Policy Optimization (GRPO) ☆113 · Updated 3 months ago
- An extension of the nanoGPT repository for training small MOE models. ☆172 · Updated 5 months ago
- Naively combining transformers and Kolmogorov-Arnold Networks to learn and experiment ☆36 · Updated last year
- Explorations into the proposal from the paper "Grokfast, Accelerated Grokking by Amplifying Slow Gradients" ☆101 · Updated 7 months ago