Luckfort / CD
[COLING'25] Exploring Concept Depth: How Large Language Models Acquire Knowledge at Different Layers?
☆78 · Updated 4 months ago
Alternatives and similar repositories for CD
Users interested in CD are comparing it to the libraries listed below.
- What Happened in LLMs Layers when Trained for Fast vs. Slow Thinking: A Gradient Perspective ☆64 · Updated 3 months ago
- This repository contains the code and data for the paper "SelfIE: Self-Interpretation of Large Language Model Embeddings" by Haozhe Chen,… ☆49 · Updated 5 months ago
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆59 · Updated last year
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆84 · Updated 7 months ago
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆89 · Updated last week
- [NeurIPS 2024 Spotlight] Code and data for the paper "Finding Transformer Circuits with Edge Pruning". ☆56 · Updated 2 months ago
- [ACL'24] Chain of Thought (CoT) is significant in improving the reasoning abilities of large language models (LLMs). However, the correla… ☆46 · Updated 3 weeks ago
- ☆37 · Updated last year
- Function Vectors in Large Language Models (ICLR 2024) ☆167 · Updated last month
- Co-LLM: Learning to Decode Collaboratively with Multiple Language Models ☆114 · Updated last year
- Official repository for Montessori-Instruct: Generate Influential Training Data Tailored for Student Learning [ICLR 2025] ☆45 · Updated 4 months ago
- [ACL 2025] Knowledge Unlearning for Large Language Models ☆32 · Updated last month
- [ICLR 2025] Cheating Automatic LLM Benchmarks: Null Models Achieve High Win Rates (Oral) ☆78 · Updated 7 months ago
- PaCE: Parsimonious Concept Engineering for Large Language Models (NeurIPS 2024) ☆35 · Updated 6 months ago
- ☆36 · Updated 2 months ago
- ☆26 · Updated last year
- [NeurIPS 2024] Knowledge Circuits in Pretrained Transformers ☆148 · Updated 3 months ago
- AnchorAttention: Improved attention for LLMs long-context training ☆208 · Updated 4 months ago
- A Sober Look at Language Model Reasoning ☆52 · Updated this week
- ☆51 · Updated last month
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers" ☆107 · Updated last year
- Long Context Extension and Generalization in LLMs ☆56 · Updated 8 months ago
- Code for "Reasoning to Learn from Latent Thoughts" ☆104 · Updated 2 months ago
- Official code for Guiding Language Model Math Reasoning with Planning Tokens ☆11 · Updated last year
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples ☆88 · Updated last week
- [ICLR 2025] SuperCorrect: Advancing Small LLM Reasoning with Thought Template Distillation and Self-Correction ☆70 · Updated 2 months ago
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆58 · Updated 6 months ago
- ☆105 · Updated 2 months ago
- Codebase for Instruction Following without Instruction Tuning ☆34 · Updated 8 months ago
- Collection of Reverse Engineering in Large Model ☆32 · Updated 4 months ago