Skytliang / COT-Reading-List
☆27 · Updated 2 years ago
Alternatives and similar repositories for COT-Reading-List
Users interested in COT-Reading-List are comparing it to the repositories listed below.
- ☆47 · Updated last year
- [ICML'2024] Can AI Assistants Know What They Don't Know? ☆83 · Updated last year
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆116 · Updated last year
- [2025-TMLR] A Survey on the Honesty of Large Language Models ☆59 · Updated 9 months ago
- Repository for Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning ☆165 · Updated last year
- ☆26 · Updated 2 years ago
- [EMNLP 2024] The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey" ☆138 · Updated last year
- Code & data for our paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆69 · Updated last year
- self-adaptive in-context learning ☆45 · Updated 2 years ago
- ☆75 · Updated last year
- my commonly-used tools ☆61 · Updated 9 months ago
- Code and results of the paper "On the Reliability of Psychological Scales on Large Language Models" ☆30 · Updated last year
- ☆21 · Updated 9 months ago
- Evaluating the Ripple Effects of Knowledge Editing in Language Models ☆56 · Updated last year
- The implementation of the ACL 2024 paper "MAPO: Advancing Multilingual Reasoning through Multilingual Alignment-as-Preference Optimization" ☆42 · Updated last year
- Information on NLP PhD applications around the world. ☆37 · Updated last year
- [ACL 2024 Findings] CriticBench: Benchmarking LLMs for Critique-Correct Reasoning ☆27 · Updated last year
- Code and data for "ConflictBank: A Benchmark for Evaluating the Influence of Knowledge Conflicts in LLM" (NeurIPS 2024 Track Datasets and…) ☆51 · Updated 4 months ago
- Recent papers on (1) Psychology of LLMs; (2) Biases in LLMs. ☆49 · Updated last year
- Do Large Language Models Know What They Don’t Know? ☆99 · Updated 10 months ago
- [ACL 2024 (Oral)] A Prospector of Long-Dependency Data for Large Language Models ☆56 · Updated last year
- ☆64 · Updated 2 years ago
- A repository of useful research/skill-upgrading talks and articles in the NLP/CV/AI area (in Chinese). ☆85 · Updated last year
- Official code for the paper "SPA-RL: Reinforcing LLM Agent via Stepwise Progress Attribution" ☆43 · Updated 3 weeks ago
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆61 · Updated last year
- A method of ensemble learning for heterogeneous large language models. ☆61 · Updated last year
- Dialogue Planning via Brownian Bridge Stochastic Process for Goal-directed Proactive Dialogue (ACL Findings 2023) ☆22 · Updated last year
- Repository for the paper "Cognitive Mirage: A Review of Hallucinations in Large Language Models" ☆47 · Updated last year
- Safety-J: Evaluating Safety with Critique ☆16 · Updated last year
- The implementation of LeCo ☆31 · Updated 8 months ago