suzgunmirac / crowd-sampling
Follow the Wisdom of the Crowd: Effective Text Generation via Minimum Bayes Risk Decoding
☆18 · Updated 2 years ago
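The repository implements Minimum Bayes Risk (MBR) decoding: sample many candidate generations, score each one by its average utility against all the other samples (the "crowd"), and return the candidate with the highest expected utility. A minimal sketch of that selection step, assuming a list of already-sampled candidates and using a toy token-overlap F1 as the utility (the paper's setup would use a proper generation metric such as BLEU or BLEURT):

```python
from collections import Counter


def token_f1(a: str, b: str) -> float:
    """Toy utility: F1 overlap between the token multisets of two strings."""
    ca, cb = Counter(a.split()), Counter(b.split())
    overlap = sum((ca & cb).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(ca.values())
    recall = overlap / sum(cb.values())
    return 2 * precision * recall / (precision + recall)


def mbr_decode(candidates: list[str], utility=token_f1) -> str:
    """Pick the candidate with the highest average utility against the
    other candidates, i.e. the one the 'crowd' of samples agrees with most."""
    def expected_utility(cand: str) -> float:
        others = [c for c in candidates if c is not cand]
        return sum(utility(cand, o) for o in others) / max(len(others), 1)

    return max(candidates, key=expected_utility)


# Hypothetical samples: the first two agree closely, so the consensus wins.
samples = [
    "the cat sat on the mat",
    "a cat sat on the mat",
    "the dog ran in the park",
]
print(mbr_decode(samples))  # → "the cat sat on the mat"
```

In contrast to picking the single highest-probability sample, this selection rule favors outputs that are central among the samples, which is what makes it robust to individual degenerate generations.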
Alternatives and similar repositories for crowd-sampling:
Users interested in crowd-sampling are comparing it to the libraries listed below.
- TBC ☆26 · Updated 2 years ago
- ☆58 · Updated 2 years ago
- ☆42 · Updated 4 years ago
- [NeurIPS 2023] Repetition In Repetition Out: Towards Understanding Neural Text Degeneration from the Data Perspective ☆30 · Updated last year
- Code for the paper "Simulating Bandit Learning from User Feedback for Extractive Question Answering". ☆18 · Updated 2 years ago
- ☆21 · Updated 2 years ago
- The Multitask Long Document Benchmark ☆38 · Updated 2 years ago
- DEMix Layers for Modular Language Modeling ☆53 · Updated 3 years ago
- The source code of our ACL paper "A Training-free and Reference-free Summarization Evaluation Metric via Centrality-weighted Relevance an… ☆14 · Updated last year
- Repo for ICML23 "Why do Nearest Neighbor Language Models Work?" ☆56 · Updated 2 years ago
- A method for evaluating the high-level coherence of machine-generated texts. Identifies high-level coherence issues in transformer-based … ☆11 · Updated 2 years ago
- Code for ACL 2022 paper "Expanding Pretrained Models to Thousands More Languages via Lexicon-based Adaptation" ☆30 · Updated 2 years ago
- ReCross: Unsupervised Cross-Task Generalization via Retrieval Augmentation ☆24 · Updated 2 years ago
- ☆21 · Updated last year
- ☆34 · Updated 3 years ago
- ☆25 · Updated 2 years ago
- ☆17 · Updated last year
- ☆22 · Updated 3 years ago
- ☆12 · Updated 2 years ago
- [ACL'24 Oral] Analysing The Impact of Sequence Composition on Language Model Pre-Training ☆19 · Updated 7 months ago
- An Empirical Study On Contrastive Search And Contrastive Decoding For Open-ended Text Generation ☆26 · Updated 9 months ago
- ☆45 · Updated 3 years ago
- Momentum Decoding: Open-ended Text Generation as Graph Exploration ☆19 · Updated 2 years ago
- Code for "Tracing Knowledge in Language Models Back to the Training Data" ☆37 · Updated 2 years ago
- Repo for the paper "Large Language Models Struggle to Learn Long-Tail Knowledge" ☆75 · Updated last year
- ReConsider is a re-ranking model that re-ranks the top-K (passage, answer-span) predictions of an Open-Domain QA Model like DPR (Karpukhi… ☆49 · Updated 3 years ago
- ☆22 · Updated 2 years ago
- ☆41 · Updated 4 years ago
- Transfer Learning in Dialogue Benchmarking Toolkit ☆14 · Updated last year
- ☆31 · Updated 8 months ago