tianyang-x / SaySelf
Public code repo for the paper "SaySelf: Teaching LLMs to Express Confidence with Self-Reflective Rationales"
☆109 · Updated last year
Alternatives and similar repositories for SaySelf
Users interested in SaySelf are comparing it to the repositories listed below.
- Code for the EMNLP 2024 paper "Detecting and Mitigating Contextual Hallucinations in Large Language Models Using Only Attention Maps" ☆135 · Updated last week
- Middleware for LLMs: Tools Are Instrumental for Language Agents in Complex Environments (EMNLP 2024) ☆37 · Updated 9 months ago
- [NeurIPS 2023] Code for the paper "Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias" ☆153 · Updated last year
- [NeurIPS 2024] Knowledge Circuits in Pretrained Transformers ☆159 · Updated 8 months ago
- Code for "In-context Vectors: Making In Context Learning More Effective and Controllable Through Latent Space Steering" ☆190 · Updated 8 months ago
- PASTA: Post-hoc Attention Steering for LLMs ☆125 · Updated 11 months ago
- Co-LLM: Learning to Decode Collaboratively with Multiple Language Models ☆122 · Updated last year
- A dataset of LLM-generated chain-of-thought steps annotated with mistake locations ☆82 · Updated last year
- Functional Benchmarks and the Reasoning Gap ☆89 · Updated last year
- Code and datasets for the paper "Measuring and Enhancing Trustworthiness of LLMs in RAG through Grounded Attributions and Learning to Ref…" ☆68 · Updated 7 months ago
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples ☆107 · Updated 3 months ago
- Codebase accompanying the "Summary of a Haystack" paper ☆79 · Updated last year
- ☆128 · Updated last year
- ☆155 · Updated last year
- Code for the EMNLP 2024 paper "Learn Beyond The Answer: Training Language Models with Reflection for Mathematical Reasoning" ☆55 · Updated last year
- Augmented LLM with self-reflection ☆132 · Updated last year
- [ACL'24] Code and data for the paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" ☆54 · Updated last year
- Evaluating LLMs with fewer examples ☆163 · Updated last year
- Scalable Meta-Evaluation of LLMs as Evaluators ☆42 · Updated last year
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆71 · Updated last year
- Implementation of the paper "Answering Questions by Meta-Reasoning over Multiple Chains of Thought" ☆96 · Updated last year
- Unofficial implementation of "Chain-of-Thought Reasoning Without Prompting" ☆33 · Updated last year
- Benchmarking LLMs with Challenging Tasks from Real Users ☆242 · Updated 11 months ago
- Scripts for generating synthetic finetuning data for reducing sycophancy ☆117 · Updated 2 years ago
- Code for PHATGOOSE, introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆90 · Updated last year
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆136 · Updated 4 months ago
- DocBench: A Benchmark for Evaluating LLM-based Document Reading Systems ☆49 · Updated last year
- Code repo for "Agent Instructs Large Language Models to be General Zero-Shot Reasoners" ☆116 · Updated this week
- ☆102 · Updated 11 months ago
- ☆146 · Updated last week