StevenZHB / CoT_Causal_Analysis
Repository of the paper "How Likely Do LLMs with CoT Mimic Human Reasoning?"
☆23 · Updated 10 months ago
Alternatives and similar repositories for CoT_Causal_Analysis
Users interested in CoT_Causal_Analysis are comparing it to the libraries listed below.
- Unofficial Implementation of Chain-of-Thought Reasoning Without Prompting ☆34 · Updated last year
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆62 · Updated last year
- [ACL'24] Chain of Thought (CoT) is significant in improving the reasoning abilities of large language models (LLMs). However, the correla… ☆46 · Updated 7 months ago
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples ☆112 · Updated 5 months ago
- Search, Verify and Feedback: Towards Next Generation Post-training Paradigm of Foundation Models via Verifier Engineering ☆63 · Updated last year
- Data and code for the Corr2Cause paper (ICLR 2024) ☆111 · Updated last year
- [ICML 2024] Official repository for EXO: Towards Efficient Exact Optimization of Language Model Alignment ☆57 · Updated last year
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆61 · Updated last year
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆39 · Updated last year
- PASTA: Post-hoc Attention Steering for LLMs ☆132 · Updated last year
- [ACL'24] Code and data for the paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" ☆54 · Updated last year
- [EMNLP 2024] A Peek into Token Bias: Large Language Models Are Not Yet Genuine Reasoners ☆26 · Updated last year
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers" ☆123 · Updated last year
- [NAACL 2025] The official implementation of the paper "Learning From Failure: Integrating Negative Examples when Fine-tuning Large Language M… ☆29 · Updated last year
- ☆24 · Updated 8 months ago
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆68 · Updated last year
- ☆139 · Updated last year
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆158 · Updated 6 months ago
- PaCE: Parsimonious Concept Engineering for Large Language Models (NeurIPS 2024) ☆41 · Updated last year
- ☆41 · Updated 2 years ago
- Exploring whether LLMs perform case-based or rule-based reasoning ☆30 · Updated last year
- ☆52 · Updated 8 months ago
- Personalized Soups: Personalized Large Language Model Alignment via Post-hoc Parameter Merging ☆115 · Updated 2 years ago
- Official implementation of "Catastrophic Failure of LLM Unlearning via Quantization" (ICLR 2025) ☆35 · Updated 10 months ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆125 · Updated last year
- ☆51 · Updated 10 months ago
- Directional Preference Alignment ☆58 · Updated last year
- ☆21 · Updated 3 months ago
- Models, data, and code for the paper "MetaAligner: Towards Generalizable Multi-Objective Alignment of Language Models" ☆24 · Updated last year
- [NeurIPS 2024] The official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" ☆134 · Updated 9 months ago