transformer-vq / transformer_vq
☆ 201 · Dec 4, 2023 · Updated 2 years ago
Alternatives and similar repositories for transformer_vq
Users interested in transformer_vq are comparing it to the libraries listed below.
- Official Implementation of ACL 2023: Don't Parse, Choose Spans! Continuous and Discontinuous Constituency Parsing via Autoregressive Span … ☆ 14 · Aug 25, 2023 · Updated 2 years ago
- ☆ 14 · Nov 20, 2022 · Updated 3 years ago
- ☆ 53 · May 20, 2024 · Updated last year
- Advanced Formal Language Theory (263-5352-00L; Spring 2023) ☆ 10 · Feb 21, 2023 · Updated 2 years ago
- PyTorch implementation for PaLM: A Hybrid Parser and Language Model. ☆ 10 · Jan 7, 2020 · Updated 6 years ago
- ☆ 106 · Mar 9, 2024 · Updated last year
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆ 27 · Apr 17, 2024 · Updated last year
- AGaLiTe: Approximate Gated Linear Transformers for Online Reinforcement Learning (published in TMLR) ☆ 23 · Oct 15, 2024 · Updated last year
- Code for the paper "Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns" ☆ 18 · Mar 15, 2024 · Updated last year
- [ICCV 2025] SimVQ: Addressing Representation Collapse in Vector Quantized Models with One Linear Layer ☆ 318 · Dec 29, 2024 · Updated last year
- Why Do We Need Weight Decay in Modern Deep Learning? [NeurIPS 2024] ☆ 70 · Sep 25, 2024 · Updated last year
- Fine-Tuning Pre-trained Transformers into Decaying Fast Weights ☆ 19 · Oct 9, 2022 · Updated 3 years ago
- ☆ 13 · Feb 7, 2023 · Updated 3 years ago
- Here we will test various linear attention designs. ☆ 62 · Apr 25, 2024 · Updated last year
- Xmixers: A collection of SOTA efficient token/channel mixers ☆ 28 · Sep 4, 2025 · Updated 5 months ago
- Linear Attention Sequence Parallelism (LASP) ☆ 88 · Jun 4, 2024 · Updated last year
- [NeurIPS 2023] Sparse Modular Activation for Efficient Sequence Modeling ☆ 40 · Dec 2, 2023 · Updated 2 years ago
- Triton Implementation of the HyperAttention Algorithm ☆ 48 · Dec 11, 2023 · Updated 2 years ago
- ☆ 45 · Apr 30, 2018 · Updated 7 years ago
- ☆ 51 · Jan 28, 2024 · Updated 2 years ago
- JAX/Flax implementation of the Hyena Hierarchy ☆ 34 · Apr 27, 2023 · Updated 2 years ago
- [ICML 2023] "Data Efficient Neural Scaling Law via Model Reusing" by Peihao Wang, Rameswar Panda, Zhangyang Wang ☆ 14 · Jan 4, 2024 · Updated 2 years ago
- ☆ 18 · Mar 10, 2023 · Updated 2 years ago
- Curse-of-memory phenomenon of RNNs in sequence modelling ☆ 19 · May 8, 2025 · Updated 9 months ago
- HGRN2: Gated Linear RNNs with State Expansion ☆ 56 · Aug 20, 2024 · Updated last year
- Explorations into the recently proposed Taylor Series Linear Attention ☆ 100 · Aug 18, 2024 · Updated last year
- APPy (Annotated Parallelism for Python) enables users to annotate loops and tensor expressions in Python with compiler directives akin to… ☆ 30 · Jan 28, 2026 · Updated 2 weeks ago
- A repository for research on medium-sized language models. ☆ 77 · May 23, 2024 · Updated last year
- ☆ 35 · Nov 22, 2024 · Updated last year
- Learning to Model Editing Processes ☆ 26 · Aug 3, 2025 · Updated 6 months ago
- ☆ 20 · May 30, 2024 · Updated last year
- Some preliminary explorations of Mamba's context scaling. ☆ 218 · Feb 8, 2024 · Updated 2 years ago
- Accelerated First Order Parallel Associative Scan ☆ 196 · Jan 7, 2026 · Updated last month
- Stick-breaking attention ☆ 62 · Jul 1, 2025 · Updated 7 months ago
- Token Omission Via Attention ☆ 128 · Oct 13, 2024 · Updated last year
- FlexAttention w/ FlashAttention3 Support ☆ 27 · Oct 5, 2024 · Updated last year
- Official Code Repository for the paper "Key-value memory in the brain" ☆ 31 · Feb 25, 2025 · Updated 11 months ago
- Engineering the state of RNN language models (Mamba, RWKV, etc.) ☆ 32 · May 25, 2024 · Updated last year
- My Implementation of Q-Sparse: All Large Language Models can be Fully Sparsely-Activated ☆ 33 · Aug 14, 2024 · Updated last year