Official repository of Pretraining Without Attention (BiGS). BiGS is the first model to achieve BERT-level transfer learning on the GLUE benchmark with subquadratic complexity in sequence length (that is, without attention).
☆117 · Updated Mar 16, 2024
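BiGS replaces self-attention with bidirectional state-space layers combined through multiplicative gating, so token mixing scales subquadratically in sequence length. The sketch below is a minimal illustration of that idea, not the official BiGS code: the state-space kernel is stood in for by a learned FFT-based long convolution (O(L log L) per direction), and all class and parameter names here are hypothetical.

```python
# Illustrative sketch of attention-free bidirectional mixing (not the BiGS repo code).
# Each direction applies a causal depthwise long convolution via FFT, a common
# stand-in for an SSM kernel; a sigmoid gate combines the two passes.
import torch
import torch.nn as nn


class BidirectionalGatedMixer(nn.Module):
    def __init__(self, d_model: int, max_len: int = 512):
        super().__init__()
        # One learned per-channel kernel per direction (stand-in for an SSM kernel).
        self.k_fwd = nn.Parameter(torch.randn(d_model, max_len) * 0.02)
        self.k_bwd = nn.Parameter(torch.randn(d_model, max_len) * 0.02)
        self.in_proj = nn.Linear(d_model, 2 * d_model)  # value and gate branches
        self.out_proj = nn.Linear(d_model, d_model)

    def _causal_conv(self, x: torch.Tensor, kernel: torch.Tensor) -> torch.Tensor:
        # x: (B, L, D). FFT convolution along the sequence axis, O(L log L).
        B, L, D = x.shape
        n = 2 * L  # zero-pad so circular convolution becomes linear (causal) convolution
        X = torch.fft.rfft(x.transpose(1, 2), n=n)   # (B, D, n//2 + 1)
        K = torch.fft.rfft(kernel[:, :L], n=n)       # (D, n//2 + 1)
        y = torch.fft.irfft(X * K, n=n)[..., :L]     # keep the causal prefix: (B, D, L)
        return y.transpose(1, 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        v, g = self.in_proj(x).chunk(2, dim=-1)
        fwd = self._causal_conv(v, self.k_fwd)                   # left-to-right pass
        bwd = self._causal_conv(v.flip(1), self.k_bwd).flip(1)   # right-to-left pass
        return self.out_proj(torch.sigmoid(g) * (fwd + bwd))     # multiplicative gate


x = torch.randn(2, 128, 64)                   # (batch, length, d_model)
print(BidirectionalGatedMixer(64)(x).shape)   # torch.Size([2, 128, 64])
```

The two flipped passes give every position a full bidirectional receptive field without any pairwise attention, which is what allows an encoder built from such blocks to be pretrained with a BERT-style masked objective at subquadratic cost.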
Alternatives and similar repositories for BiGS
Users interested in BiGS are comparing it to the repositories listed below.
- Source code for the NAACL 2022 main-conference paper "Dynamic Programming in Rank Space: Scaling Structured Inference with Low-Rank HMMs and PCFGs" · ☆10 · Updated Sep 26, 2022
- Code for the paper "Understanding the Mechanics of SPIGOT: Surrogate Gradients for Latent Structure Learning" · ☆11 · Updated May 5, 2021
- ☆13 · Updated Feb 7, 2023
- Distributed transactions · ☆13 · Updated Sep 19, 2019
- Starbucks: Improved Training for 2D Matryoshka Embeddings · ☆22 · Updated Jun 30, 2025
- [ICML 2023] "Data Efficient Neural Scaling Law via Model Reusing" by Peihao Wang, Rameswar Panda, Zhangyang Wang · ☆14 · Updated Jan 4, 2024
- Implementation and experiments for "Partially Supervised NER via Expected Entity Ratio" (TACL 2022) · ☆14 · Updated Nov 7, 2022
- ☆58 · Updated Jul 9, 2024
- ☆15 · Updated Mar 22, 2023
- HGRN2: Gated Linear RNNs with State Expansion · ☆56 · Updated Aug 20, 2024
- Differentiable Perturb-and-Parse operator · ☆25 · Updated Mar 7, 2019
- ☆10 · Updated Oct 2, 2024
- ☆11 · Updated Oct 11, 2023
- Advanced Formal Language Theory (263-5352-00L; Spring 2023) · ☆10 · Updated Feb 21, 2023
- A probabilistic model for contextual word representation. Accepted to ACL 2023 Findings. · ☆25 · Updated Oct 22, 2023
- Implementation of Cascaded Head-colliding Attention (ACL 2021) · ☆11 · Updated Sep 16, 2021
- ☆12 · Updated Dec 13, 2022
- ☆14 · Updated Feb 1, 2024
- Code for the paper "Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns" · ☆18 · Updated Mar 15, 2024
- Implementation of the LDP module block in PyTorch and Zeta from the paper "MobileVLM: A Fast, Strong and Open Vision Language Assistant …" · ☆15 · Updated Mar 11, 2024
- Code for "Discovering Non-monotonic Autoregressive Orderings with Variational Inference" (paper and code updated from ICLR 2021) · ☆12 · Updated Mar 7, 2024
- [EMNLP 2022] Official implementation of Transnormer from the paper "The Devil in Linear Transformer" · ☆64 · Updated Jul 30, 2023
- Official implementation of "Hydra: Bidirectional State Space Models Through Generalized Matrix Mixers" · ☆170 · Updated Jan 30, 2025
- [ACL 2020] Highway Transformer: A Gated Transformer · ☆33 · Updated Dec 5, 2021
- Easy trees in LaTeX and TikZ · ☆14 · Updated Dec 16, 2022
- Fine-Tuning Pre-trained Transformers into Decaying Fast Weights · ☆19 · Updated Oct 9, 2022
- Here we will test various linear attention designs. · ☆62 · Updated Apr 25, 2024
- Leveraging Recursive Gumbel-Max Trick for Approximate Inference in Combinatorial Spaces (NeurIPS 2021) · ☆14 · Updated Dec 11, 2021
- Convolutions for Sequence Modeling · ☆912 · Updated Jun 13, 2024
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) · ☆24 · Updated Jun 6, 2024
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" · ☆248 · Updated Jun 6, 2025
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" · ☆562 · Updated Dec 28, 2024
- Official PyTorch (Lightning) implementation of the NeurIPS 2020 paper "Efficient Marginalization of Discrete and Structured Latent Variab…" · ☆27 · Updated May 3, 2021
- ☆13 · Updated Apr 15, 2024
- Code for the ACL 2021 paper "Structural Guidance for Transformer Language Models" · ☆13 · Updated Sep 17, 2025
- Implementation of https://srush.github.io/annotated-s4 · ☆512 · Updated Jun 20, 2025
- TART: A plug-and-play Transformer module for task-agnostic reasoning · ☆202 · Updated Jun 22, 2023
- ☆33 · Updated Oct 4, 2024
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" · ☆27 · Updated Apr 17, 2024