Shark-NLP / CAB
☆31 · Updated 2 years ago
Alternatives and similar repositories for CAB
Users interested in CAB are comparing it to the libraries listed below.
- A Kernel-Based View of Language Model Fine-Tuning (https://arxiv.org/abs/2210.05643) ☆78 · Updated 2 years ago
- Code for the ACL 2022 paper "StableMoE: Stable Routing Strategy for Mixture of Experts" ☆50 · Updated 3 years ago
- Official implementation of ACL 2023: Don't Parse, Choose Spans! Continuous and Discontinuous Constituency Parsing via Autoregressive Span … ☆14 · Updated 2 years ago
- Follow the Wisdom of the Crowd: Effective Text Generation via Minimum Bayes Risk Decoding ☆18 · Updated 2 years ago
- This repository is the official implementation of our EMNLP 2022 paper ELMER: A Non-Autoregressive Pre-trained Language Model for Efficie… ☆26 · Updated 3 years ago
- ☆19 · Updated 3 years ago
- ☆54 · Updated last year
- DEMix Layers for Modular Language Modeling ☆54 · Updated 4 years ago
- ☆20 · Updated 4 years ago
- The official code of EMNLP 2022, "SCROLLS: Standardized CompaRison Over Long Language Sequences". ☆69 · Updated last year
- ☆54 · Updated 2 years ago
- Language modeling via stochastic processes. Oral @ ICLR 2022. ☆138 · Updated 2 years ago
- [NeurIPS 2023] Repetition In Repetition Out: Towards Understanding Neural Text Degeneration from the Data Perspective ☆37 · Updated 2 years ago
- ☆13 · Updated 2 years ago
- ☆51 · Updated 2 years ago
- Skill-It! A Data-Driven Skills Framework for Understanding and Training Language Models ☆47 · Updated last year
- Retrieval as Attention ☆82 · Updated 2 years ago
- Repo for the paper "Large Language Models Struggle to Learn Long-Tail Knowledge" ☆78 · Updated 2 years ago
- [EMNLP'23] Code for "Non-autoregressive Text Editing with Copy-aware Latent Alignments". ☆20 · Updated 2 years ago
- ☆15 · Updated 2 years ago
- Codebase for Hyperdecoders (https://arxiv.org/abs/2203.08304) ☆13 · Updated 3 years ago
- [NeurIPS 2022] "A Win-win Deal: Towards Sparse and Robust Pre-trained Language Models", Yuanxin Liu, Fandong Meng, Zheng Lin, Jiangnan Li… ☆21 · Updated last year
- The official repository for the paper "From Zero to Hero: Examining the Power of Symbolic Tasks in Instruction Tuning". ☆66 · Updated 2 years ago
- Repo for ICML 2023 "Why do Nearest Neighbor Language Models Work?" ☆59 · Updated 2 years ago
- A probabilistic model for contextual word representation. Accepted to ACL 2023 Findings. ☆25 · Updated 2 years ago
- [ACL'24 Oral] Analysing the Impact of Sequence Composition on Language Model Pre-Training ☆22 · Updated last year
- Staged Training for Transformer Language Models ☆33 · Updated 3 years ago
- Princeton NLP's pre-training library based on fairseq with DeepSpeed kernel integration 🚃 ☆114 · Updated 3 years ago
- Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models ☆142 · Updated 3 years ago
- The original Backpack Language Model implementation, a fork of FlashAttention ☆69 · Updated 2 years ago