mukhal / intrinsic-source-citationLinks
[COLM '24] Source-Aware Training Enables Knowledge Attribution in Language Models
☆19 · Updated 7 months ago
Alternatives and similar repositories for intrinsic-source-citation
Users interested in intrinsic-source-citation are comparing it to the libraries listed below.
- SWIM-IR is a Synthetic Wikipedia-based Multilingual Information Retrieval training set with 28 million query-passage pairs spanning 33 languages ☆49 · Updated 2 years ago
- InstructIR, a novel benchmark specifically designed to evaluate the instruction-following ability of information retrieval models ☆31 · Updated last year
- Aioli: A unified optimization framework for language model data mixing ☆31 · Updated 10 months ago
- Code for our paper "Resources and Evaluations for Multi-Distribution Dense Information Retrieval" ☆15 · Updated last year
- Code for PHATGOOSE, introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆91 · Updated last year
- ☆58 · Updated last year
- [ICLR 2023] "Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners" ☆116 · Updated 5 months ago
- Reference implementation for "Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model" ☆45 · Updated last month
- ☆49 · Updated 7 months ago
- Codebase for Context-aware Meta-learned Loss Scaling (CaMeLS). https://arxiv.org/abs/2305.15076 ☆25 · Updated last year
- ☆14 · Updated last year
- ☆39 · Updated last year
- An unofficial implementation of the Infini-gram model proposed by Liu et al. (2024) ☆33 · Updated last year
- Implementation of the model "Reka Core, Flash, and Edge: A Series of Powerful Multimodal Language Models" in PyTorch ☆29 · Updated last week
- Starbucks: Improved Training for 2D Matryoshka Embeddings ☆22 · Updated 4 months ago
- ☆75 · Updated last year
- ☆53 · Updated last year
- Improving Text Embedding of Language Models Using Contrastive Fine-tuning ☆65 · Updated last year
- Dataset and evaluation suite enabling LLM instruction-following for scientific literature understanding ☆44 · Updated 8 months ago
- Embedding Recycling for Language Models ☆38 · Updated 2 years ago
- Repo for the ICML 2023 paper "Why do Nearest Neighbor Language Models Work?" ☆59 · Updated 2 years ago
- [NeurIPS 2023 Main Track] Repository for the paper "Don't Stop Pretraining? Make Prompt-based Fine-tuning Powerful Learner" ☆76 · Updated last year
- Interview-based evaluation of LLMs ☆22 · Updated 10 months ago
- Finding semantically meaningful and accurate prompts ☆48 · Updated 2 years ago
- [ICLR 2025] "Attention in Large Language Models Yields Efficient Zero-Shot Re-Rankers" ☆36 · Updated 7 months ago
- ☆14 · Updated last year
- ☆54 · Updated 2 years ago
- Plug-and-play Search Interfaces with Pyserini and Hugging Face ☆32 · Updated 2 years ago
- ☆30 · Updated last year
- ☆26 · Updated 9 months ago