uclaml / Rephrase-and-Respond
Official repo of Rephrase-and-Respond: data, code, and evaluation
☆103 · Updated last year
Alternatives and similar repositories for Rephrase-and-Respond
Users interested in Rephrase-and-Respond are comparing it to the libraries listed below.
- Official Implementation of InstructZero; the first framework to optimize bad prompts of ChatGPT (API LLMs) and finally obtain good prompts… ☆195 · Updated last year
- Meta-CoT: Generalizable Chain-of-Thought Prompting in Mixed-task Scenarios with Large Language Models ☆101 · Updated 2 years ago
- Codebase accompanying the "Summary of a Haystack" paper. ☆80 · Updated last year
- [NeurIPS 2023] Code for the paper `Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias`. ☆156 · Updated 2 years ago
- Mixing Language Models with Self-Verification and Meta-Verification ☆111 · Updated last year
- ☆162 · Updated last year
- Evaluating LLMs with CommonGen-Lite ☆93 · Updated last year
- Code accompanying "How I learned to start worrying about prompt formatting". ☆113 · Updated 7 months ago
- Code repo for "Agent Instructs Large Language Models to be General Zero-Shot Reasoners" ☆120 · Updated 3 months ago
- Scripts for generating synthetic finetuning data for reducing sycophancy. ☆118 · Updated 2 years ago
- Evaluating LLMs with fewer examples ☆169 · Updated last year
- [NeurIPS 2023] PyTorch code for "Can Language Models Teach? Teacher Explanations Improve Student Performance via Theory of Mind" ☆66 · Updated 2 years ago
- ☆150 · Updated 2 years ago
- ☆122 · Updated last year
- Evaluating tool-augmented LLMs in conversation settings ☆88 · Updated last year
- Code and data for "StructLM: Towards Building Generalist Models for Structured Knowledge Grounding" (COLM 2024) ☆75 · Updated last year
- A dataset of LLM-generated chain-of-thought steps annotated with mistake location. ☆85 · Updated last year
- A simple GPT-based evaluation tool for multi-aspect, interpretable assessment of LLMs. ☆90 · Updated last year
- A set of utilities for running few-shot prompting experiments on large language models ☆126 · Updated 2 years ago
- ☆129 · Updated last year
- Public code repo for the paper "SaySelf: Teaching LLMs to Express Confidence with Self-Reflective Rationales" ☆112 · Updated last year
- Official repo for "Make Your LLM Fully Utilize the Context" ☆261 · Updated last year
- Functional Benchmarks and the Reasoning Gap ☆89 · Updated last year
- Code for the EMNLP 2024 paper "Detecting and Mitigating Contextual Hallucinations in Large Language Models Using Only Attention Maps" ☆142 · Updated 3 months ago
- ☆78 · Updated 2 years ago
- LongEmbed: Extending Embedding Models for Long Context Retrieval (EMNLP 2024) ☆145 · Updated last year
- Verifiers for LLM Reinforcement Learning ☆80 · Updated 9 months ago
- 🔧 Compare how Agent systems perform on several benchmarks. 📊🚀 ☆103 · Updated 5 months ago
- Code for PHATGOOSE, introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆91 · Updated last year
- Flacuna was developed by fine-tuning Vicuna on Flan-mini, a comprehensive instruction collection encompassing various tasks. Vicuna is al… ☆111 · Updated 2 years ago