nanowell / Differential-Transformer-PyTorch
PyTorch implementation of the Differential-Transformer architecture for sequence modeling, tailored as a decoder-only model in the style of large language models (LLMs). The architecture incorporates a novel Differential Attention mechanism, a multi-head structure, RMSNorm, and SwiGLU.
☆30 · Updated 2 weeks ago
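The core idea of Differential Attention is to compute attention as the difference of two softmax attention maps, which cancels common-mode attention noise. The sketch below is a minimal, single-head illustration of that idea, not the code from this repository; the class name, the simple learnable scalar `lambda_` (the paper uses a reparameterized lambda), and the tensor shapes are assumptions for illustration.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiffAttention(nn.Module):
    """Minimal single-head differential attention sketch (illustrative, not the repo's code)."""

    def __init__(self, d_model: int, d_head: int, lambda_init: float = 0.8):
        super().__init__()
        # Two sets of query/key projections: softmax(Q1 K1^T) - lambda * softmax(Q2 K2^T)
        self.w_q = nn.Linear(d_model, 2 * d_head, bias=False)
        self.w_k = nn.Linear(d_model, 2 * d_head, bias=False)
        self.w_v = nn.Linear(d_model, d_head, bias=False)
        self.lambda_ = nn.Parameter(torch.tensor(lambda_init))  # simplified learnable mixing scalar
        self.scale = 1.0 / math.sqrt(d_head)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        q1, q2 = self.w_q(x).chunk(2, dim=-1)
        k1, k2 = self.w_k(x).chunk(2, dim=-1)
        v = self.w_v(x)
        seq_len = x.size(1)
        # Causal mask: each position attends only to itself and earlier tokens.
        mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool, device=x.device), 1)
        a1 = (q1 @ k1.transpose(-2, -1)) * self.scale
        a2 = (q2 @ k2.transpose(-2, -1)) * self.scale
        a1 = a1.masked_fill(mask, float("-inf"))
        a2 = a2.masked_fill(mask, float("-inf"))
        # Differential attention map: difference of two softmax distributions.
        attn = F.softmax(a1, dim=-1) - self.lambda_ * F.softmax(a2, dim=-1)
        return attn @ v

# Toy usage: out has shape (2, 10, 16)
# out = DiffAttention(d_model=64, d_head=16)(torch.randn(2, 10, 64))
```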
Related projects
Alternatives and complementary repositories for Differential-Transformer-PyTorch
- ☆52 · Updated this week
- Implementation of Griffin from the paper: "Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models" ☆49 · Updated this week
- Minimal Mamba-2 implementation in PyTorch ☆129 · Updated 4 months ago
- The open source implementation of the cross attention mechanism from the paper: "JOINTLY TRAINING LARGE AUTOREGRESSIVE MULTIMODAL MODELS" ☆22 · Updated 8 months ago
- A repository for DenseSSMs ☆88 · Updated 7 months ago
- Implementation of Agent Attention in PyTorch ☆86 · Updated 4 months ago
- ☆41 · Updated 7 months ago
- My implementation of the original transformer model (Vaswani et al.). I've additionally included the playground.py file for visualizing o… ☆41 · Updated 11 months ago
- Implementation of Switch Transformers from the paper: "Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficien… ☆54 · Updated this week
- Implementation of a Light Recurrent Unit in PyTorch ☆46 · Updated last month
- An open source community implementation of the model from the "DIFFERENTIAL TRANSFORMER" paper by Microsoft. ☆12 · Updated this week
- ☆118 · Updated 6 months ago
- Transformer model based on the Kolmogorov–Arnold Network (KAN), which is an alternative to the Multi-Layer Perceptron (MLP) ☆24 · Updated last month
- State Space Models ☆62 · Updated 6 months ago
- A simple but robust PyTorch implementation of RetNet from "Retentive Network: A Successor to Transformer for Large Language Models" (http… ☆100 · Updated 11 months ago
- First-principle implementations of groundbreaking AI algorithms using a wide range of deep learning frameworks, accompanied by supporting… ☆66 · Updated 3 weeks ago
- Implementation of MoE Mamba from the paper: "MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts" in PyTorch and Ze… ☆83 · Updated this week
- Implementation of xLSTM in PyTorch from the paper: "xLSTM: Extended Long Short-Term Memory" ☆103 · Updated this week
- Simba ☆182 · Updated 7 months ago
- ☆74 · Updated 4 months ago
- An efficient PyTorch implementation of selective scan in one file; works with both CPU and GPU, with corresponding mathematical derivatio… ☆71 · Updated 8 months ago
- Causal depthwise conv1d in CUDA, with a PyTorch interface ☆320 · Updated 3 months ago
- Integrating Mamba/SSMs with Transformer for Enhanced Long Context and High-Quality Sequence Modeling ☆167 · Updated last week
- The official implementation of "DAPE: Data-Adaptive Positional Encoding for Length Extrapolation" ☆31 · Updated last month
- Implementation of Qformer from BLIP2 in Zeta Lego blocks. ☆31 · Updated last week
- PyTorch implementation of Jamba: "Jamba: A Hybrid Transformer-Mamba Language Model" ☆135 · Updated last week
- ☆179 · Updated 11 months ago
- RWKV-TS: Beyond Traditional Recurrent Neural Network for Time Series Tasks ☆75 · Updated 2 months ago
- PyTorch implementation of HyperMixing, a linear-time token-mixing technique used in the HyperMixer architecture ☆21 · Updated last year
- Official repository of the IEEE SLT 2024 paper "Self-Supervised Syllable Discovery Based on Speaker-Disentangled HuBERT" ☆28 · Updated 3 weeks ago