gonglinyuan/metro_t0
Code repo for "Model-Generated Pretraining Signals Improve Zero-Shot Generalization of Text-to-Text Transformers" (ACL 2023)
☆22 · Updated last year
Related projects
Alternatives and complementary repositories for metro_t0
- Embedding Recycling for Language Models ☆38 · Updated last year
- A metric for evaluating the faithfulness of text generated by LLMs ☆31 · Updated last year
- SWIM-IR is a Synthetic Wikipedia-based Multilingual Information Retrieval training set with 28 million query-passage pairs spanning 33 languages ☆44 · Updated last year
- Starbucks: Improved Training for 2D Matryoshka Embeddings ☆17 · Updated last month
- FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions ☆40 · Updated 4 months ago
- ☆18 · Updated last year
- Plug-and-play Search Interfaces with Pyserini and Hugging Face ☆32 · Updated last year
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆41 · Updated 10 months ago
- Few-shot Learning with Auxiliary Data ☆26 · Updated 11 months ago
- ☆15 · Updated 3 months ago
- Prompting Large Language Models to Generate Dense and Sparse Representations for Zero-Shot Document Retrieval ☆38 · Updated 3 weeks ago
- Official code repo for the paper "Great Memory, Shallow Reasoning: Limits of kNN-LMs" ☆18 · Updated 2 months ago
- ☆38 · Updated 7 months ago
- InstructIR, a benchmark specifically designed to evaluate the instruction-following ability of information retrieval models ☆28 · Updated 5 months ago
- Tasks for describing differences between text distributions ☆16 · Updated 3 months ago
- ☆25 · Updated 11 months ago
- ☆11 · Updated 11 months ago
- Adding new tasks to T0 without catastrophic forgetting ☆30 · Updated 2 years ago
- ☆46 · Updated this week
- PyTorch implementation of the model from "Reka Core, Flash, and Edge: A Series of Powerful Multimodal Language Models" ☆29 · Updated last week
- LTG-Bert ☆29 · Updated 10 months ago
- ☆55 · Updated last year
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset ☆92 · Updated last year
- Can LLMs generate code-mixed sentences through zero-shot prompting? ☆11 · Updated last year
- Transformers at any scale ☆41 · Updated 10 months ago
- ☆27 · Updated 5 months ago
- Repo for the ICML 2023 paper "Why do Nearest Neighbor Language Models Work?" ☆56 · Updated last year
- [ACL'24 Oral] Analysing The Impact of Sequence Composition on Language Model Pre-Training ☆18 · Updated 3 months ago
- XTR: Rethinking the Role of Token Retrieval in Multi-Vector Retrieval ☆37 · Updated 5 months ago