tianyang-x / SaySelf
Public code repo for paper "SaySelf: Teaching LLMs to Express Confidence with Self-Reflective Rationales"
☆109 Updated last year
Alternatives and similar repositories for SaySelf
Users interested in SaySelf are comparing it to the repositories listed below.
- Code for the EMNLP 2024 paper "Detecting and Mitigating Contextual Hallucinations in Large Language Models Using Only Attention Maps" ☆141 Updated last month
- [NeurIPS 2023] This is the code for the paper `Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias`. ☆156 Updated 2 years ago
- Code for In-context Vectors: Making In Context Learning More Effective and Controllable Through Latent Space Steering ☆193 Updated 9 months ago
- Code for EMNLP 2024 paper "Learn Beyond The Answer: Training Language Models with Reflection for Mathematical Reasoning" ☆56 Updated last year
- [NeurIPS 2024] Knowledge Circuits in Pretrained Transformers ☆159 Updated 3 weeks ago
- Middleware for LLMs: Tools Are Instrumental for Language Agents in Complex Environments (EMNLP'2024) ☆37 Updated 11 months ago
- PASTA: Post-hoc Attention Steering for LLMs ☆129 Updated last year
- Co-LLM: Learning to Decode Collaboratively with Multiple Language Models ☆123 Updated last year
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples ☆112 Updated 4 months ago
- ☆157 Updated last year
- ☆129 Updated last year
- ☆139 Updated last year
- [ACL'24] Code and data of paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" ☆54 Updated last year
- A dataset of LLM-generated chain-of-thought steps annotated with mistake location. ☆84 Updated last year
- augmented LLM with self reflection ☆135 Updated 2 years ago
- Functional Benchmarks and the Reasoning Gap ☆90 Updated last year
- ☆157 Updated last month
- Codes and datasets for the paper Measuring and Enhancing Trustworthiness of LLMs in RAG through Grounded Attributions and Learning to Ref… ☆68 Updated 9 months ago
- Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators (Liu et al.; COLM 2024) ☆48 Updated 10 months ago
- Unofficial Implementation of Chain-of-Thought Reasoning Without Prompting ☆34 Updated last year
- ☆150 Updated last year
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆71 Updated last year
- ☆100 Updated last year
- Code and data accompanying our paper on arXiv "Faithful Chain-of-Thought Reasoning". ☆164 Updated last year
- Official repository for Montessori-Instruct: Generate Influential Training Data Tailored for Student Learning [ICLR 2025] ☆50 Updated 10 months ago
- ☆52 Updated 6 months ago
- Codebase accompanying the Summary of a Haystack paper. ☆79 Updated last year
- Implementation of the paper: "Answering Questions by Meta-Reasoning over Multiple Chains of Thought" ☆96 Updated last year
- Scalable Meta-Evaluation of LLMs as Evaluators ☆43 Updated last year
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆150 Updated 5 months ago