erwanplantec / LNDP
☆43 · Updated 2 months ago
Related projects:
- ☆48 · Updated 3 months ago
- Triton implementation of the HyperAttention algorithm ☆46 · Updated 9 months ago
- PyTorch implementation of models from the Zamba2 series. ☆63 · Updated last month
- Unofficial but efficient implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ☆74 · Updated 7 months ago
- Here we will test various linear attention designs. ☆55 · Updated 4 months ago
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆34 · Updated 10 months ago
- ☆21 · Updated 2 weeks ago
- Official implementation of Phi-Mamba, a MOHAWK-distilled model ("Transformers to SSMs: Distilling Quadratic Knowledge to Subquadratic Models") ☆61 · Updated last week
- GoldFinch and other hybrid transformer components ☆38 · Updated 2 months ago
- Efficient World Models with Context-Aware Tokenization (ICML 2024) ☆73 · Updated 2 months ago
- Implementation of Soft Actor-Critic and some of its improvements in PyTorch ☆30 · Updated this week
- A repository for research on medium-sized language models. ☆71 · Updated 3 months ago
- Evaluating the Mamba architecture on the Othello game ☆41 · Updated 4 months ago
- Transformer with Mu-Parameterization, implemented in JAX/Flax. Supports FSDP on TPU pods. ☆29 · Updated 3 weeks ago
- ☆42 · Updated this week
- ☆50 · Updated last month
- Q-Probe: A Lightweight Approach to Reward Maximization for Language Models ☆37 · Updated 3 months ago
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆105 · Updated 3 weeks ago
- ☆22 · Updated 10 months ago
- ☆41 · Updated 2 months ago
- A MAD laboratory to improve AI architecture designs 🧪 ☆84 · Updated 4 months ago
- Linear Attention Sequence Parallelism (LASP) ☆64 · Updated 3 months ago
- ☆42 · Updated 7 months ago
- Implementation of the Quiet-STaR paper (https://arxiv.org/pdf/2403.09629.pdf) ☆27 · Updated last month
- A state-space model with rational transfer function representation. ☆61 · Updated 4 months ago
- Official PyTorch implementation of the Longhorn deep state-space model ☆35 · Updated last month
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆87 · Updated 8 months ago
- Implementation of GateLoop Transformer in PyTorch and JAX ☆86 · Updated 3 months ago
- Official implementation of the paper "DeciMamba: Exploring the Length Extrapolation Potential of Mamba" ☆18 · Updated last month
- Some personal experiments around routing tokens to different autoregressive attention, akin to mixture-of-experts ☆101 · Updated last year