jungokasai / T2R
☆14 · Updated 3 years ago
Alternatives and similar repositories for T2R
Users interested in T2R are comparing it to the libraries listed below.
- Triton version of GQA flash attention, based on the tutorial ☆12 · Updated last year
- Official implementation of ACL 2023: Don't Parse, Choose Spans! Continuous and Discontinuous Constituency Parsing via Autoregressive Span … ☆14 · Updated 2 years ago
- ☆20 · Updated last year
- Fine-Tuning Pre-trained Transformers into Decaying Fast Weights ☆19 · Updated 3 years ago
- A method for evaluating the high-level coherence of machine-generated texts. Identifies high-level coherence issues in transformer-based … ☆11 · Updated 2 years ago
- Open-sourcing code associated with the AAAI-25 paper "On the Expressiveness and Length Generalization of Selective State-Space Models on … ☆15 · Updated 2 months ago
- Official implementation of the EMNLP 2022 paper ELMER: A Non-Autoregressive Pre-trained Language Model for Efficie… ☆26 · Updated 3 years ago
- ☆21 · Updated 2 years ago
- Xmixers: A collection of SOTA efficient token/channel mixers ☆29 · Updated 2 months ago
- A Structured Span Selector (NAACL 2022): a structured span selector with a WCFG for span selection tasks (coreference resolution, semanti… ☆21 · Updated 3 years ago
- ☆31 · Updated 2 years ago
- [NeurIPS 2023] Sparse Modular Activation for Efficient Sequence Modeling ☆39 · Updated last year
- ☆19 · Updated 3 years ago
- Efficient PScan implementation in PyTorch ☆17 · Updated last year
- ☆57 · Updated last year
- A probabilistic model for contextual word representation. Accepted to ACL 2023 Findings. ☆25 · Updated 2 years ago
- [ICML 2023] "Data Efficient Neural Scaling Law via Model Reusing" by Peihao Wang, Rameswar Panda, Zhangyang Wang ☆14 · Updated last year
- PyTorch implementation of PaLM: A Hybrid Parser and Language Model ☆10 · Updated 5 years ago
- Official repository for the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆27 · Updated last year
- ☆13 · Updated 2 years ago
- Accompanying code for "Memory-efficient Transformers via Top-k Attention" (Ankit Gupta, Guy Dar, Shaya Goodman, David Ciprut, Jonatha… ☆69 · Updated 4 years ago
- Contextual Position Encoding, but with some custom CUDA kernels (https://arxiv.org/abs/2405.18719) ☆22 · Updated last year
- Repository for the ICML 2023 paper "Why Do Nearest Neighbor Language Models Work?" ☆59 · Updated 2 years ago
- ☆16 · Updated 2 years ago
- ☆22 · Updated last year
- Official implementation of TransNormer from our EMNLP 2022 paper "The Devil in Linear Transformer" ☆63 · Updated 2 years ago
- Transformers at any scale ☆41 · Updated last year
- Staged Training for Transformer Language Models ☆33 · Updated 3 years ago
- ☆18 · Updated 2 years ago
- Implementation of Cascaded Head-colliding Attention (ACL 2021) ☆11 · Updated 4 years ago