facebookresearch / Qinco
Residual Quantization with Implicit Neural Codebooks
☆108 · Updated 3 months ago
Alternatives and similar repositories for Qinco
Users interested in Qinco are comparing it to the repositories listed below.
- [ICLR 2025 & COLM 2025] Official PyTorch implementation of the Forgetting Transformer and Adaptive Computation Pruning ☆134 · Updated 3 weeks ago
- Pytorch implementation of the PEER block from the paper, Mixture of A Million Experts, by Xu Owen He at Deepmind ☆132 · Updated 2 months ago
- Triton implementation of bi-directional (non-causal) linear attention ☆60 · Updated 11 months ago
- Why Do We Need Weight Decay in Modern Deep Learning? [NeurIPS 2024] ☆70 · Updated last year
- Implementation of a multimodal diffusion transformer in Pytorch ☆107 · Updated last year
- Implementation of the proposed DeepCrossAttention by Heddes et al at Google research, in Pytorch ☆96 · Updated 10 months ago
- Code repository for the public reproduction of the language modelling experiments on "MatFormer: Nested Transformer for Elastic Inference… ☆30 · Updated 2 years ago
- Implementation of a Light Recurrent Unit in Pytorch ☆49 · Updated last year
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆163 · Updated 8 months ago
- Explorations into adversarial losses on top of autoregressive loss for language modeling ☆41 · Updated 2 weeks ago
- Explorations into the recently proposed Taylor Series Linear Attention ☆100 · Updated last year
- Implementation of the proposed Adam-atan2 from Google Deepmind in Pytorch ☆134 · Updated 2 months ago
- Easily run PyTorch on multiple GPUs & machines ☆56 · Updated last month
- [NeurIPS 2022] Your Transformer May Not be as Powerful as You Expect (official implementation) ☆34 · Updated 2 years ago
- ResiDual: Transformer with Dual Residual Connections, https://arxiv.org/abs/2304.14802 ☆97 · Updated 2 years ago
- Implementation of the proposed MaskBit from Bytedance AI ☆83 · Updated last year
- HGRN2: Gated Linear RNNs with State Expansion ☆56 · Updated last year
- ☆263 · Updated 7 months ago
- [NeurIPS 2024] VeLoRA: Memory Efficient Training using Rank-1 Sub-Token Projections ☆21 · Updated last year
- Official code for the paper "Attention as a Hypernetwork" ☆46 · Updated last year
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best… ☆59 · Updated 9 months ago
- [ICLR 2023] Official implementation of Transnormer in our ICLR 2023 paper - Toeplitz Neural Network for Sequence Modeling ☆81 · Updated last year
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆102 · Updated last year
- Attempt to make multiple residual streams from Bytedance's Hyper-Connections paper accessible to the public ☆124 · Updated this week
- Tiny re-implementation of MDM in style of LLaDA and nano-gpt speedrun ☆56 · Updated 9 months ago
- flex-block-attn: an efficient block sparse attention computation library ☆102 · Updated 2 weeks ago
- Implementation of Agent Attention in Pytorch ☆93 · Updated last year
- [ICML 2025] Fourier Position Embedding: Enhancing Attention's Periodic Extension for Length Generalization ☆105 · Updated 7 months ago
- ☆102 · Updated 10 months ago
- ☆56 · Updated 2 years ago