DAMO-NLP-SG / contrastive-cot
Contrastive Chain-of-Thought Prompting
☆63 · Updated last year
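For context, contrastive chain-of-thought prompting augments standard chain-of-thought demonstrations with an invalid rationale, so the model sees both how to reason and what mistakes to avoid. The sketch below is a minimal illustration of that prompt layout; the demonstration text, the `build_contrastive_cot_prompt` helper, and the formatting are assumptions for illustration and are not taken from this repository's code.

```python
# Minimal sketch of the contrastive chain-of-thought idea (illustrative only):
# the prompt pairs a correct rationale with a deliberately wrong one before
# asking the new question.

CORRECT_DEMO = """Question: James has 3 packs of 4 pens. How many pens does he have?
Correct explanation: James has 3 packs and each pack holds 4 pens, so he has
3 * 4 = 12 pens. The answer is 12."""

WRONG_DEMO = """Wrong explanation: James has 3 packs and each pack holds 4 pens, so he has
3 + 4 = 7 pens. The answer is 7."""


def build_contrastive_cot_prompt(question: str) -> str:
    """Assemble a prompt containing one positive and one negative rationale."""
    return "\n\n".join([
        CORRECT_DEMO,
        WRONG_DEMO,
        f"Question: {question}\nCorrect explanation:",
    ])


if __name__ == "__main__":
    # The assembled string would be sent to an LLM of your choice.
    print(build_contrastive_cot_prompt(
        "A farmer has 5 baskets with 6 apples each. How many apples in total?"
    ))
```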
Alternatives and similar repositories for contrastive-cot
Users interested in contrastive-cot are comparing it to the repositories listed below.
- [ACL'24] Code and data for the paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" ☆54 · Updated last year
- [arXiv preprint] Official repository for "Evaluating Language Models as Synthetic Data Generators" ☆33 · Updated 6 months ago
- [ICLR'24 Spotlight] "Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts" ☆69 · Updated last year
- [ICLR'24 Spotlight] Tool-Augmented Reward Modeling ☆50 · Updated 2 weeks ago
- [NeurIPS 2024] Train LLMs with diverse system messages reflecting individualized preferences to generalize to unseen system messages ☆48 · Updated 6 months ago
- Official repository for the ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆73 · Updated last month
- ☆12 · Updated last year
- ☆98 · Updated last year
- Code and data for "MT-Eval: A Multi-Turn Capabilities Evaluation Benchmark for Large Language Models" ☆41 · Updated 8 months ago
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems ☆62 · Updated 11 months ago
- [NAACL 2024 Outstanding Paper] Source code for the paper "R-Tuning: Instructing Large Language Models to Say 'I Don't…" ☆114 · Updated 11 months ago
- Code and data for the FACTOR paper ☆47 · Updated last year
- Code and data for the paper "Context-faithful Prompting for Large Language Models" ☆40 · Updated 2 years ago
- ☆41 · Updated last year
- Source code of "Reasons to Reject? Aligning Language Models with Judgments" ☆58 · Updated last year
- ☆54 · Updated 10 months ago
- [ICLR 2025] InstructRAG: Instructing Retrieval-Augmented Generation via Self-Synthesized Rationales ☆98 · Updated 4 months ago
- Instructions and demonstrations for building a GLM capable of formal logical reasoning ☆53 · Updated 9 months ago
- Code for the arXiv preprint "MoT: Pre-thinking and Recalling Enable ChatGPT to Self-Improve with Memory-of-Thoughts" ☆20 · Updated last year
- [NAACL 2025] Official implementation of the paper "Learning From Failure: Integrating Negative Examples when Fine-tuning Large Language M…" ☆26 · Updated last year
- [NAACL 2024] Making Language Models Better Tool Learners with Execution Feedback ☆41 · Updated last year
- ☆72 · Updated last year
- Implementation of the paper "Making Retrieval-Augmented Language Models Robust to Irrelevant Context" ☆69 · Updated 10 months ago
- Code and demo program for LLMs with self-verification ☆60 · Updated last year
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing".