ShuheSH / A-Survey-of-the-Reasoning-Abilities-of-LLMs
☆26 · Updated 9 months ago
Alternatives and similar repositories for A-Survey-of-the-Reasoning-Abilities-of-LLMs
Users interested in A-Survey-of-the-Reasoning-Abilities-of-LLMs are comparing it to the libraries listed below.
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆98 · Updated last year
- [EMNLP 2024] Model Editing Harms General Abilities of Large Language Models: Regularization to the Rescue ☆37 · Updated 6 months ago
- A Sober Look at Language Model Reasoning ☆89 · Updated 3 weeks ago
- ☆42 · Updated last year
- ☆25 · Updated last year
- [ACL 2024] Code and data for "Machine Unlearning of Pre-trained Large Language Models" ☆65 · Updated last year
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆63 · Updated last year
- A curated list of resources on Reinforcement Learning with Verifiable Rewards (RLVR) and the reasoning capability boundary of Large Langu… ☆83 · Updated last week
- Codebase for decoding compressed trust. ☆25 · Updated last year
- ☆30 · Updated last year
- ☆41 · Updated 2 years ago
- [ICLR 2025] Cheating Automatic LLM Benchmarks: Null Models Achieve High Win Rates (Oral) ☆84 · Updated last year
- [NeurIPS 2024] Official code of $\beta$-DPO: Direct Preference Optimization with Dynamic $\beta$ ☆49 · Updated last year
- [2025-TMLR] A Survey on the Honesty of Large Language Models ☆63 · Updated last year
- [NeurIPS'24] Weak-to-Strong Search: Align Large Language Models via Searching over Small Language Models ☆64 · Updated last year
- ☆19 · Updated 7 months ago
- [EMNLP 2025 Main] ConceptVectors Benchmark and Code for the paper "Intrinsic Evaluation of Unlearning Using Parametric Knowledge Traces" ☆38 · Updated 3 months ago
- The official repository of 'Unnatural Language Are Not Bugs but Features for LLMs' ☆23 · Updated 6 months ago
- [ACL 25] SafeChain: Safety of Language Models with Long Chain-of-Thought Reasoning Capabilities ☆26 · Updated 8 months ago
- The code of “Improving Weak-to-Strong Generalization with Scalable Oversight and Ensemble Learning” ☆17 · Updated last year
- [ACL 2024] Shifting Attention to Relevance: Towards the Predictive Uncertainty Quantification of Free-Form Large Language Models ☆59 · Updated last year
- ☆68 · Updated last year
- Evaluating the Ripple Effects of Knowledge Editing in Language Models ☆55 · Updated last year
- [AAAI 2024] MELO: Enhancing Model Editing with Neuron-indexed Dynamic LoRA ☆26 · Updated last year
- Official codebase for "STAIR: Improving Safety Alignment with Introspective Reasoning" ☆87 · Updated 9 months ago
- [COLM 2025] SEAL: Steerable Reasoning Calibration of Large Language Models for Free ☆46 · Updated 8 months ago
- [NeurIPS 2024] The official implementation of the paper: Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs ☆133 · Updated 8 months ago
- [NeurIPS 2024] "Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales?" ☆37 · Updated 4 months ago
- [ACL 2024 main] Aligning Large Language Models with Human Preferences through Representation Engineering (https://aclanthology.org/2024.… ☆28 · Updated last year
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆67 · Updated last year