orhonovich / instruction-induction
☆65 · Updated 2 years ago
Alternatives and similar repositories for instruction-induction
Users interested in instruction-induction are comparing it to the libraries listed below.
- Code and data accompanying our paper on arXiv "Faithful Chain-of-Thought Reasoning". ☆159 · Updated last year
- ☆85 · Updated 2 years ago
- ☆50 · Updated last year
- ☆44 · Updated 8 months ago
- Repo for the paper "Large Language Models Struggle to Learn Long-Tail Knowledge" ☆76 · Updated 2 years ago
- Inspecting and Editing Knowledge Representations in Language Models ☆116 · Updated last year
- ☆97 · Updated last year
- PASTA: Post-hoc Attention Steering for LLMs ☆117 · Updated 6 months ago
- [ICML 2023] Code for our paper "Compositional Exemplars for In-context Learning". ☆100 · Updated 2 years ago
- ☆28 · Updated last year
- Repository for Decomposed Prompting ☆90 · Updated last year
- Code and data for the paper "Context-faithful Prompting for Large Language Models". ☆40 · Updated 2 years ago
- ☆61 · Updated 2 years ago
- Implementation of the ICML 2023 paper "Specializing Smaller Language Models towards Multi-Step Reasoning". ☆130 · Updated last year
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆59 · Updated last year
- Implementation of the paper "Answering Questions by Meta-Reasoning over Multiple Chains of Thought" ☆95 · Updated last year
- The Grade-School Math with Irrelevant Context (GSM-IC) benchmark is an arithmetic reasoning dataset built upon GSM8K by adding irrelevant se… ☆60 · Updated 2 years ago
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆68 · Updated 2 years ago
- The official repository for the paper "From Zero to Hero: Examining the Power of Symbolic Tasks in Instruction Tuning". ☆65 · Updated 2 years ago
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper entitled "R-Tuning: Instructing Large Language Models to Say 'I Don't… ☆111 · Updated 10 months ago
- A Survey of Hallucination in Large Foundation Models ☆54 · Updated last year
- Self-Alignment with Principle-Following Reward Models ☆161 · Updated 3 weeks ago
- Code for RL4F: Generating Natural Language Feedback with Reinforcement Learning for Repairing Model Outputs. ACL 2023. ☆63 · Updated 6 months ago
- Repository for "Propagating Knowledge Updates to LMs Through Distillation" (NeurIPS 2023). ☆25 · Updated 9 months ago
- Official repository for the ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆73 · Updated last week
- ☆137 · Updated last year
- The corresponding code from our paper "REFINER: Reasoning Feedback on Intermediate Representations" (EACL 2024). Do not hesitate t… ☆70 · Updated last year
- Supporting code for the ReCEval paper ☆28 · Updated 8 months ago
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated last year
- Code for the ACL 2023 paper "Pre-Training to Learn in Context" ☆108 · Updated 10 months ago