gonglinyuan / metro_t0
Code repo for "Model-Generated Pretraining Signals Improves Zero-Shot Generalization of Text-to-Text Transformers" (ACL 2023)
☆22 · Updated last year
Alternatives and similar repositories for metro_t0
Users interested in metro_t0 are comparing it to the repositories listed below.
- Repository containing the SPIN experiments on the DIBT 10k ranked prompts · ☆24 · Updated last year
- Embedding Recycling for Language models · ☆38 · Updated 2 years ago
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model · ☆43 · Updated 3 weeks ago
- This is a new metric that can be used to evaluate faithfulness of text generated by LLMs. The work behind this repository can be found he… · ☆31 · Updated 2 years ago
- ☆22 · Updated 9 months ago
- SWIM-IR is a Synthetic Wikipedia-based Multilingual Information Retrieval training set with 28 million query-passage pairs spanning 33 la… · ☆49 · Updated last year
- Plug-and-play Search Interfaces with Pyserini and Hugging Face · ☆32 · Updated 2 years ago
- A library for squeakily cleaning and filtering language datasets. · ☆47 · Updated 2 years ago
- Experiments for efforts to train a new and improved t5 · ☆75 · Updated last year
- Supercharge huggingface transformers with model parallelism. · ☆77 · Updated 3 months ago
- [ICLR 2023] Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners · ☆116 · Updated 3 months ago
- Code repository for the c-BTM paper · ☆107 · Updated 2 years ago
- Official repo for NAACL 2024 Findings paper "LeTI: Learning to Generate from Textual Interactions." · ☆64 · Updated 2 years ago
- ☆44 · Updated 11 months ago
- ☆58 · Updated last year
- ☆39 · Updated last year
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" · ☆39 · Updated 11 months ago
- Few-shot Learning with Auxiliary Data · ☆31 · Updated last year
- Truly flash implementation of the DeBERTa disentangled attention mechanism. · ☆66 · Updated 3 weeks ago
- InstructIR, a novel benchmark specifically designed to evaluate the instruction-following ability of information retrieval models. Our foc… · ☆32 · Updated last year
- Utilities for Training Very Large Models · ☆58 · Updated last year
- Official implementation of "BERTs are Generative In-Context Learners" · ☆32 · Updated 7 months ago
- Code for PHATGOOSE introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" · ☆90 · Updated last year
- Demonstration that finetuning a RoPE model on sequences longer than those used in pre-training extends the model's context limit · ☆62 · Updated 2 years ago
- ☆76 · Updated last year
- Starbucks: Improved Training for 2D Matryoshka Embeddings · ☆22 · Updated 3 months ago
- ☆23 · Updated 2 months ago
- A toolkit implementing advanced methods to transfer models and model knowledge across tokenizers. · ☆47 · Updated 3 months ago
- ☆69 · Updated last year
- ☆16 · Updated 6 months ago