Hritikbansal / sparse_feedback
☆29 · Updated last year
Alternatives and similar repositories for sparse_feedback
Users interested in sparse_feedback are comparing it to the libraries listed below.
- Flacuna was developed by fine-tuning Vicuna on Flan-mini, a comprehensive instruction collection encompassing various tasks. Vicuna is al…☆111 · Updated last year
- Code for the paper "HyperRouter: Towards Efficient Training and Inference of Sparse Mixture of Experts via HyperNetwork"☆33 · Updated last year
- [NeurIPS 2023] PyTorch code for "Can Language Models Teach? Teacher Explanations Improve Student Performance via Theory of Mind"☆66 · Updated last year
- Reference implementation for "Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model"☆43 · Updated last year
- Official repo for the EMNLP 2023 paper "Explain-then-Translate: An Analysis on Improving Program Translation with Self-generated Explanations…☆29 · Updated last year
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data"☆48 · Updated last year
- Co-LLM: Learning to Decode Collaboratively with Multiple Language Models☆116 · Updated last year
- In-Context Alignment: Chat with Vanilla Language Models Before Fine-Tuning☆35 · Updated last year
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment☆60 · Updated 10 months ago
- Official implementation for "Law of the Weakest Link: Cross Capabilities of Large Language Models"☆42 · Updated 9 months ago
- Code and data from the paper "Human Feedback is not Gold Standard"☆19 · Updated last year
- ☆45 · Updated 3 months ago
- Official implementation of "Gemini in Reasoning: Unveiling Commonsense in Multimodal Large Language Models"☆36 · Updated last year
- The official repository for "Safer-Instruct: Aligning Language Models with Automated Preference Data"☆17 · Updated last year
- Script for processing OpenAI's PRM800K process supervision dataset into an Alpaca-style instruction-response format☆27 · Updated 2 years ago
- [NeurIPS 2024] Train LLMs with diverse system messages reflecting individualized preferences to generalize to unseen system messages☆48 · Updated 7 months ago
- ☆16 · Updated 11 months ago
- ReBase: Training Task Experts through Retrieval Based Distillation☆29 · Updated 5 months ago
- Codebase for "Instruction Following without Instruction Tuning"☆35 · Updated 9 months ago
- ☆27 · Updated 2 weeks ago
- List of papers on Self-Correction of LLMs☆73 · Updated 6 months ago
- The official implementation of Self-Exploring Language Models (SELM)☆64 · Updated last year
- Minimal implementation of the paper "Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models" (arXiv 2401.01335)☆28 · Updated last year
- Source code for our paper "Put Your Money Where Your Mouth Is: Evaluating Strategic Planning and Execution of LLM Agents in an Auction A…☆45 · Updated last year
- Backtracing: Retrieving the Cause of the Query (EACL 2024 Findings, long paper)☆89 · Updated 11 months ago
- ☆20 · Updated 3 months ago
- Code and data for CoachLM, an automatic instruction revision approach for LLM instruction tuning☆60 · Updated last year
- Source code of "Reasons to Reject? Aligning Language Models with Judgments"☆58 · Updated last year
- [NeurIPS 2023 Main Track] This is the repository for the paper titled "Don’t Stop Pretraining? Make Prompt-based Fine-tuning Powerful Lea…☆74 · Updated last year
- Code for the EMNLP 2024 paper "Detecting and Mitigating Contextual Hallucinations in Large Language Models Using Only Attention Maps"☆128 · Updated 11 months ago