vicgalle / distilled-self-critique
distilled Self-Critique refines the outputs of an LLM with only synthetic data
☆11Updated last year
Alternatives and similar repositories for distilled-self-critique
Users interested in distilled-self-critique are comparing it to the libraries listed below
- Code and Data for "Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering"☆87Updated last year
- Creative Instructions Project☆11Updated 2 years ago
- Code for the arXiv paper: "LLMs as Factual Reasoners: Insights from Existing Benchmarks and Beyond"☆61Updated 11 months ago
- Data and code for the paper "Inducing Positive Perspectives with Text Reframing"☆61Updated 2 years ago
- Code, datasets, models for the paper "Automatic Evaluation of Attribution by Large Language Models"☆56Updated 2 years ago
- ☆70Updated 2 years ago
- An extension of the Transformers library to include a T5ForSequenceClassification class.☆40Updated 2 years ago
- Code associated with the WANLI dataset in Liu et al., 2022☆31Updated 2 years ago
- Code and data for "Dynosaur: A Dynamic Growth Paradigm for Instruction-Tuning Data Curation" (EMNLP 2023)☆65Updated 2 years ago
- This repository accompanies our paper “Do Prompt-Based Models Really Understand the Meaning of Their Prompts?”☆85Updated 3 years ago
- PERFECT: Prompt-free and Efficient Few-shot Learning with Language Models☆111Updated 3 weeks ago
- The data and the PyTorch implementation for the models and experiments in the paper "Exploiting Asymmetry for Synthetic Training Data Gen…☆64Updated 2 years ago
- [ACL 2023] Few-shot Reranking for Multi-hop QA via Language Model Prompting☆27Updated 2 months ago
- ☆54Updated 2 years ago
- [Data + code] ExpertQA : Expert-Curated Questions and Attributed Answers☆137Updated last year
- Codebase for LLM story generation; updated version of https://github.com/yangkevin2/doc-story-generation☆87Updated 2 years ago
- Hercules: Attributable and Scalable Opinion Summarization (ACL 2023)☆21Updated 2 years ago
- Tk-Instruct is a Transformer model that is tuned to solve many NLP tasks by following instructions.☆183Updated 3 years ago
- ☆49Updated 2 years ago
- InstructIR, a novel benchmark specifically designed to evaluate the instruction-following ability of information retrieval models. Our foc…☆31Updated last year
- An Empirical Study On Contrastive Search And Contrastive Decoding For Open-ended Text Generation☆27Updated last year
- Code and data for paper "Context-faithful Prompting for Large Language Models".☆41Updated 2 years ago
- ☆47Updated last year
- A dataset for training/evaluating Question Answering Retrieval models on ChatGPT responses, with the possibility of training/evaluating on…☆141Updated last year
- SummScreen: A Dataset for Abstractive Screenplay Summarization (ACL 2022)☆39Updated 3 years ago
- A Human-LLM Collaborative Dataset for Generative Information-seeking with Attribution☆35Updated 2 years ago
- [AAAI 2024] Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following☆78Updated last year
- The official repository for Efficient Long-Text Understanding Using Short-Text Models (Ivgi et al., 2022) paper☆70Updated 2 years ago
- Retrieval as Attention☆82Updated 3 years ago
- Code for ProtAugment: Unsupervised diverse short-texts paraphrasing for intent detection meta-learning☆24Updated 3 years ago