Lichang-Chen / InstructZero
Official implementation of InstructZero, the first framework to optimize the bad prompts given to ChatGPT (and other API LLMs) into good ones!
☆196 · Updated last year
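To make the idea above concrete, here is a minimal, hypothetical sketch of a black-box instruction-search loop for an API LLM. It is not the InstructZero implementation (which, per its paper, optimizes a soft prompt with Bayesian optimization and uses an open-source LLM to turn that soft prompt into an instruction); this sketch swaps in plain random search over hand-written candidate instructions, and `query_llm`, the candidates, and the dev set are all placeholder assumptions.

```python
# Hypothetical sketch only -- NOT the InstructZero code. It illustrates the
# generic loop of scoring candidate instructions against a black-box LLM API
# and keeping the best one. All names below are placeholders.
import random


def query_llm(prompt: str) -> str:
    """Placeholder for a black-box API LLM call (swap in a real client)."""
    return "4"  # canned reply so the sketch runs end to end without an API key


def score_instruction(instruction: str, dev_set) -> float:
    """Zero-shot accuracy of an instruction on a small labeled dev set."""
    hits = sum(
        query_llm(f"{instruction}\nInput: {x}\nOutput:").strip() == y
        for x, y in dev_set
    )
    return hits / len(dev_set)


def search_instruction(candidates, dev_set, rounds: int = 20) -> str:
    """Random search: evaluate sampled candidates and keep the best scorer."""
    best, best_score = candidates[0], -1.0
    for _ in range(rounds):
        cand = random.choice(candidates)
        score = score_instruction(cand, dev_set)
        if score > best_score:
            best, best_score = cand, score
    return best


if __name__ == "__main__":
    dev = [("2 + 2 =", "4"), ("10 - 6 =", "4")]  # toy labeled examples
    cands = ["Solve the arithmetic problem.", "Answer with a single number."]
    print(search_instruction(cands, dev))
```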
Alternatives and similar repositories for InstructZero
Users that are interested in InstructZero are comparing it to the libraries listed below
- Scripts for generating synthetic finetuning data for reducing sycophancy. ☆117 · Updated 2 years ago
- ☆136 · Updated 2 years ago
- [NeurIPS 2023] This is the code for the paper `Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias`. ☆155 · Updated 2 years ago
- A set of utilities for running few-shot prompting experiments on large language models ☆126 · Updated 2 years ago
- Self-Alignment with Principle-Following Reward Models ☆169 · Updated 2 months ago
- Code and data accompanying our paper on arXiv "Faithful Chain-of-Thought Reasoning". ☆163 · Updated last year
- Official repo of Respond-and-Respond: data, code, and evaluation ☆104 · Updated last year
- This is the repo for the paper Shepherd -- A Critic for Language Model Generation ☆217 · Updated 2 years ago
- PASTA: Post-hoc Attention Steering for LLMs ☆127 · Updated 11 months ago
- ☆313 · Updated last year
- A simple GPT-based evaluation tool for multi-aspect, interpretable assessment of LLMs. ☆89 · Updated last year
- ☆173 · Updated 2 years ago
- Code for arXiv 2023: Improving Language Model Negotiation with Self-Play and In-Context Learning from AI Feedback ☆207 · Updated 2 years ago
- Code accompanying "How I learned to start worrying about prompt formatting". ☆110 · Updated 5 months ago
- Flacuna was developed by fine-tuning Vicuna on Flan-mini, a comprehensive instruction collection encompassing various tasks. Vicuna is al… ☆111 · Updated 2 years ago
- LongEmbed: Extending Embedding Models for Long Context Retrieval (EMNLP 2024) ☆145 · Updated last year
- Functional Benchmarks and the Reasoning Gap ☆89 · Updated last year
- The Official Repository for "Bring Your Own Data! Self-Supervised Evaluation for Large Language Models"