theblackcat102 / unify-learning-paradigms
data collator for UL2 and U-PaLM
☆29 · Updated 2 years ago
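The repository is described as a data collator for the UL2 and U-PaLM training objectives. As a rough illustration of what such a collator does (a minimal sketch, not the repository's actual code), the example below corrupts a token sequence with one of the R/S/X denoising modes from the UL2 paper; the sentinel tokens, mode-prefix strings, mixture weights, and span hyperparameters are placeholder assumptions.

```python
import random

# T5-style sentinel tokens; a real collator would map these to tokenizer ids.
SENTINELS = [f"<extra_id_{i}>" for i in range(100)]

def span_corrupt(tokens, corruption_rate, mean_span_len):
    """Replace random spans with sentinels; return (corrupted input, target)."""
    remaining = max(1, int(len(tokens) * corruption_rate))
    inputs, targets = [], []
    i = sentinel = 0
    while i < len(tokens):
        if remaining > 0 and random.random() < corruption_rate:
            # Draw a span length with the requested mean, capped by the budget.
            span = max(1, min(int(random.expovariate(1 / mean_span_len)), remaining))
            inputs.append(SENTINELS[sentinel])
            targets.append(SENTINELS[sentinel])
            targets.extend(tokens[i:i + span])
            sentinel += 1
            remaining -= span
            i += span
        else:
            inputs.append(tokens[i])
            i += 1
    return inputs, targets

def ul2_collate(tokens):
    """Pick one denoiser per example, as in the UL2 mixture-of-denoisers."""
    mode = random.choice(["R", "S", "X"])  # placeholder: uniform mixture weights
    if mode == "R":  # regular denoising: short spans, ~15% corruption
        inp, tgt = span_corrupt(tokens, 0.15, 3)
        return ["[NLU]"] + inp, tgt
    if mode == "X":  # extreme denoising: long spans / heavy corruption
        inp, tgt = span_corrupt(tokens, 0.5, 32)
        return ["[NLG]"] + inp, tgt
    # S-denoising: prefix language modeling (predict the suffix)
    split = random.randint(1, len(tokens) - 1)
    return ["[S2S]"] + tokens[:split], tokens[split:]

if __name__ == "__main__":
    toks = "the quick brown fox jumps over the lazy dog".split()
    print(ul2_collate(toks))
```

A production collator would additionally operate on tokenizer ids, pad and batch the outputs, and follow the exact mixture weights and span settings from the UL2 and U-PaLM papers.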
Alternatives and similar repositories for unify-learning-paradigms
Users interested in unify-learning-paradigms are comparing it to the repositories listed below.
- A fast implementation of T5/UL2 in PyTorch using Flash Attention ☆113 · Updated 3 months ago
- Interpreting Language Models with Contrastive Explanations (EMNLP 2022 Best Paper Honorable Mention) ☆62 · Updated 3 years ago
- Few-shot Learning with Auxiliary Data ☆31 · Updated 2 years ago
- [TMLR'23] Contrastive Search Is What You Need For Neural Text Generation ☆123 · Updated 2 years ago
- ☆22 · Updated 3 years ago
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers", NeurIPS 2023 ☆137 · Updated last year
- Implementation of the paper "AdaMix: Mixture-of-Adaptations for Parameter-efficient Model Tuning" (https://arxiv.org/abs/2205.1…) ☆136 · Updated 2 years ago
- The official code for the EMNLP 2022 paper "SCROLLS: Standardized CompaRison Over Long Language Sequences" ☆69 · Updated 2 years ago
- Official code and model checkpoints for our EMNLP 2022 paper "RankGen: Improving Text Generation with Large Ranking Models" (https://arx…) ☆138 · Updated 2 years ago
- Embedding Recycling for Language models ☆38 · Updated 2 years ago
- A framework for few-shot evaluation of autoregressive language models ☆106 · Updated 2 years ago
- Code for Zero-Shot Tokenizer Transfer ☆142 · Updated last year
- PERFECT: Prompt-free and Efficient Few-shot Learning with Language Models ☆111 · Updated last month
- Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models ☆143 · Updated 3 years ago
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile ☆116 · Updated 2 years ago
- The original Backpack Language Model implementation, a fork of FlashAttention ☆71 · Updated 2 years ago
- Implementation of the conditionally routed attention in the CoLT5 architecture, in PyTorch ☆231 · Updated last year
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset ☆96 · Updated 2 years ago
- [ICML 2023] Exploring the Benefits of Training Expert Language Models over Instruction Tuning ☆98 · Updated 2 years ago
- Repo for the ICML 2023 paper "Why do Nearest Neighbor Language Models Work?" ☆59 · Updated 3 years ago
- ☆65 · Updated 2 years ago
- SILO Language Models code repository ☆83 · Updated last year
- ☆55 · Updated last year
- A library for parameter-efficient and composable transfer learning for NLP with sparse fine-tunings ☆75 · Updated last year
- Official implementation of "BERTs are Generative In-Context Learners" ☆32 · Updated 10 months ago
- The official repository for the paper "Efficient Long-Text Understanding Using Short-Text Models" (Ivgi et al., 2022) ☆70 · Updated 2 years ago
- Transformers at any scale ☆42 · Updated 2 years ago
- Reverse Instructions to generate instruction tuning data with corpus examples ☆216 · Updated last year
- The accompanying code for "Transformer Feed-Forward Layers Are Key-Value Memories". Mor Geva, Roei Schuster, Jonathan Berant, and Omer Le… ☆99 · Updated 4 years ago
- Simple and scalable tools for data-driven pretraining data selection ☆29 · Updated 7 months ago