mukhal / intrinsic-source-citation
[COLM '24] Source-Aware Training Enables Knowledge Attribution in Language Models
☆19 · Updated 7 months ago
Alternatives and similar repositories for intrinsic-source-citation
Users interested in intrinsic-source-citation are comparing it to the libraries listed below.
- InstructIR, a novel benchmark specifically designed to evaluate the instruction-following ability of information retrieval models. Our foc… ☆31 · Updated last year
- Code for our paper Resources and Evaluations for Multi-Distribution Dense Information Retrieval ☆15 · Updated last year
- Codebase for Context-aware Meta-learned Loss Scaling (CaMeLS). https://arxiv.org/abs/2305.15076 ☆25 · Updated last year
- SWIM-IR is a Synthetic Wikipedia-based Multilingual Information Retrieval training set with 28 million query-passage pairs spanning 33 la… ☆49 · Updated last year
- Code for PHATGOOSE introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆90 · Updated last year
- ☆48 · Updated 7 months ago
- Aioli: A unified optimization framework for language model data mixing ☆28 · Updated 9 months ago
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆43 · Updated last month
- [ICLR 2023] Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners ☆116 · Updated 4 months ago
- ☆58 · Updated last year
- Transformers at any scale ☆41 · Updated last year
- ☆17 · Updated 6 months ago
- ☆13 · Updated last year
- Official repo for NAACL 2024 Findings paper "LeTI: Learning to Generate from Textual Interactions." ☆65 · Updated 2 years ago
- ☆53 · Updated last year
- [NAACL 2024] Struc-Bench: Are Large Language Models Good at Generating Complex Structured Tabular Data? https://aclanthology.org/2024.naa… ☆55 · Updated 3 months ago
- Code of ICLR paper: https://openreview.net/forum?id=-cqvvvb-NkI ☆94 · Updated 2 years ago
- Enhanced version of WikiExtractor: a Wikipedia dumps extractor ☆22 · Updated last month
- ☆26 · Updated 8 months ago
- [ICLR'25] "Attention in Large Language Models Yields Efficient Zero-Shot Re-Rankers" ☆36 · Updated 7 months ago
- [ICML 2023] Exploring the Benefits of Training Expert Language Models over Instruction Tuning ☆99 · Updated 2 years ago
- Adding new tasks to T0 without catastrophic forgetting ☆33 · Updated 3 years ago
- ☆54 · Updated 2 years ago
- ☆74 · Updated last year
- Dataset and evaluation suite enabling LLM instruction-following for scientific literature understanding. ☆44 · Updated 7 months ago
- Measuring and Controlling Persona Drift in Language Model Dialogs ☆19 · Updated last year
- Finding semantically meaningful and accurate prompts. ☆48 · Updated 2 years ago
- Based on the Tree of Thoughts paper ☆48 · Updated 2 years ago
- Starbucks: Improved Training for 2D Matryoshka Embeddings ☆22 · Updated 4 months ago
- This repository includes a benchmark and code for the paper "Evaluating LLMs at Detecting Errors in LLM Responses". ☆30 · Updated last year