jungokasai / T2R
☆14 · Updated 2 years ago
Alternatives and similar repositories for T2R
Users interested in T2R are comparing it to the repositories listed below.
- Official Implementation of ACL2023: Don't Parse, Choose Spans! Continuous and Discontinuous Constituency Parsing via Autoregressive Span … ☆13 · Updated last year
- [EMNLP 2022] Language Model Pre-Training with Sparse Latent Typing ☆14 · Updated 2 years ago
- triton ver of gqa flash attn, based on the tutorial ☆11 · Updated 10 months ago
- A method for evaluating the high-level coherence of machine-generated texts. Identifies high-level coherence issues in transformer-based … ☆11 · Updated 2 years ago
- ☆31 · Updated last year
- ☆20 · Updated last year
- A Structured Span Selector (NAACL 2022). A structured span selector with a WCFG for span selection tasks (coreference resolution, semanti… ☆21 · Updated 2 years ago
- Source code for "Sequence-Level Training for Non-Autoregressive Neural Machine Translation" ☆24 · Updated 3 years ago
- Repo for ICML23 "Why do Nearest Neighbor Language Models Work?" ☆57 · Updated 2 years ago
- [ICLR 2022] Code for the paper "Exploring Extreme Parameter Compression for Pre-trained Language Models" (https://arxiv.org/abs/2205.10036) ☆22 · Updated 2 years ago
- Use the tokenizer in parallel to achieve superior acceleration ☆16 · Updated last year
- Repo for "Zemi: Learning Zero-Shot Semi-Parametric Language Models from Multiple Tasks", ACL 2023 Findings ☆16 · Updated 2 years ago
- This repository is the official implementation of our EMNLP 2022 paper ELMER: A Non-Autoregressive Pre-trained Language Model for Efficie… ☆26 · Updated 2 years ago
- [NeurIPS 2023] Repetition In Repetition Out: Towards Understanding Neural Text Degeneration from the Data Perspective ☆31 · Updated last year
- (ACL-IJCNLP 2021) Convolutions and Self-Attention: Re-interpreting Relative Positions in Pre-trained Language Models ☆21 · Updated 2 years ago
- [ICML 2023] "Data Efficient Neural Scaling Law via Model Reusing" by Peihao Wang, Rameswar Panda, Zhangyang Wang ☆14 · Updated last year
- Xmixers: A collection of SOTA efficient token/channel mixers ☆11 · Updated 6 months ago
- Fine-Tuning Pre-trained Transformers into Decaying Fast Weights ☆19 · Updated 2 years ago
- ☆18 · Updated 2 years ago
- Transformers at any scale ☆41 · Updated last year
- Task Compass: Scaling Multi-task Pre-training with Task Prefix (EMNLP 2022 Findings; stay tuned, more will be updated) ☆22 · Updated 2 years ago
- ☆31 · Updated last year
- A probabilistic model for contextual word representation. Accepted to ACL 2023 Findings. ☆23 · Updated last year
- An Empirical Study On Contrastive Search And Contrastive Decoding For Open-ended Text Generation ☆27 · Updated 11 months ago
- Implementation of the model "Reka Core, Flash, and Edge: A Series of Powerful Multimodal Language Models" in PyTorch ☆29 · Updated last week
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆27 · Updated last year
- lanmt ebm ☆12 · Updated 4 years ago
- The official repository for our paper "The Neural Data Router: Adaptive Control Flow in Transformers Improves Systematic Generalization" ☆33 · Updated 3 years ago
- Sparse Backpropagation for Mixture-of-Expert Training ☆29 · Updated 11 months ago
- Code for "Mixed Cross Entropy Loss for Neural Machine Translation" ☆20 · Updated 3 years ago