Official repository of Pretraining Without Attention (BiGS). BiGS is the first model to achieve BERT-level transfer learning on the GLUE benchmark with subquadratic complexity in sequence length (i.e., without attention).
☆118 · Updated Mar 16, 2024
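The subquadratic claim comes from replacing attention with state-space-style long convolutions over the sequence, which can be evaluated in O(L log L) with the FFT rather than attention's O(L²). A minimal toy sketch of that idea (this is not BiGS's actual JAX implementation; the function names and the forward-plus-backward bidirectional sum are illustrative assumptions):

```python
import numpy as np

def fft_long_conv(u, k):
    """Causal long convolution of a (L, d) sequence u with a (L, d)
    per-channel kernel k in O(L log L) via FFT.

    Zero-padding to length 2L turns the FFT's circular convolution
    into a linear one; truncating to the first L outputs keeps only
    the causal part.
    """
    L = u.shape[0]
    n = 2 * L  # pad to avoid circular wrap-around
    U = np.fft.rfft(u, n=n, axis=0)
    K = np.fft.rfft(k, n=n, axis=0)
    return np.fft.irfft(U * K, n=n, axis=0)[:L]

def bidirectional_mix(u, k_fwd, k_bwd):
    """Toy bidirectional mixing: convolve the sequence forward and
    time-reversed, then sum, so every position sees both directions
    (a hypothetical simplification of a bidirectional SSM layer)."""
    fwd = fft_long_conv(u, k_fwd)
    bwd = fft_long_conv(u[::-1], k_bwd)[::-1]
    return fwd + bwd
```

The key design point is that the kernel length matches the sequence length, so each output mixes the entire context, yet the cost grows only log-linearly with L.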
Alternatives and similar repositories for BiGS
Users interested in BiGS are comparing it to the libraries listed below.
- Distributed transactions ☆13 · Updated Sep 19, 2019
- Source code for the NAACL 2022 main-conference paper "Dynamic Programming in Rank Space: Scaling Structured Inference with Low-Rank HMMs and PCFGs" ☆10 · Updated Sep 26, 2022
- Recursive Bayesian Networks ☆11 · Updated May 11, 2025
- Code for the paper "Understanding the Mechanics of SPIGOT: Surrogate Gradients for Latent Structure Learning" ☆11 · Updated May 5, 2021
- Deep Autoencoding Predictive Components ☆10 · Updated Mar 4, 2021
- ☆13 · Updated Feb 7, 2023
- Implementation and experiments for "Partially Supervised NER via Expected Entity Ratio" (TACL 2022) ☆14 · Updated Nov 7, 2022
- Differentiable Perturb-and-Parse operator ☆25 · Updated Mar 7, 2019
- ☆11 · Updated Oct 11, 2023
- [ICML 2023] "Data Efficient Neural Scaling Law via Model Reusing" by Peihao Wang, Rameswar Panda, Zhangyang Wang ☆14 · Updated Jan 4, 2024
- HGRN2: Gated Linear RNNs with State Expansion ☆56 · Updated Aug 20, 2024
- ☆15 · Updated Mar 22, 2023
- Fine-Tuning Pre-trained Transformers into Decaying Fast Weights ☆19 · Updated Oct 9, 2022
- ☆58 · Updated Jul 9, 2024
- Implementation of https://srush.github.io/annotated-s4 ☆513 · Updated Jun 20, 2025
- [NeurIPS 2024] Official repository of "The Mamba in the Llama: Distilling and Accelerating Hybrid Models" ☆238 · Updated Oct 14, 2025
- ☆12 · Updated Dec 13, 2022
- Fast and Modularized CFG-focused Models ☆23 · Updated Nov 8, 2023
- ☆10 · Updated Oct 2, 2024
- Parallel Associative Scan for Language Models ☆18 · Updated Jan 8, 2024
- Starbucks: Improved Training for 2D Matryoshka Embeddings ☆22 · Updated Jun 30, 2025
- Code for the paper "Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns" ☆18 · Updated Mar 15, 2024
- Here we will test various linear attention designs. ☆62 · Updated Apr 25, 2024
- Implementation of the LDP module block in PyTorch and Zeta from the paper: "MobileVLM: A Fast, Strong and Open Vision Language Assistant … ☆15 · Updated Mar 11, 2024
- A probabilistic model for contextual word representation. Accepted to ACL 2023 Findings. ☆25 · Updated Oct 22, 2023
- Code for "Discovering Non-monotonic Autoregressive Orderings with Variational Inference" (paper and code updated from ICLR 2021) ☆12 · Updated Mar 7, 2024
- ☆14 · Updated Feb 1, 2024
- FlexAttention w/ FlashAttention3 Support ☆27 · Updated Oct 5, 2024
- Implementation of the ICML 2022 paper "Scaling Structured Inference with Randomization" ☆14 · Updated Jul 24, 2022
- ☆19 · Updated Dec 4, 2025
- Advanced Formal Language Theory (263-5352-00L; Spring 2023) ☆10 · Updated Feb 21, 2023
- ☆13 · Updated Apr 15, 2024
- Convolutions for Sequence Modeling ☆912 · Updated Jun 13, 2024
- [ACL '20] Highway Transformer: A Gated Transformer ☆33 · Updated Dec 5, 2021
- Implementation of Cascaded Head-colliding Attention (ACL 2021) ☆11 · Updated Sep 16, 2021
- Official PyTorch (Lightning) implementation of the NeurIPS 2020 paper "Efficient Marginalization of Discrete and Structured Latent Variab… ☆27 · Updated May 3, 2021
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆249 · Updated Jun 6, 2025
- [EMNLP 2022] Official implementation of Transnormer from the paper "The Devil in Linear Transformer" ☆64 · Updated Jul 30, 2023
- Gradient Estimation with Discrete Stein Operators (NeurIPS 2022) ☆17 · Updated Nov 14, 2023