Luckfort / CD
[COLING'25] Exploring Concept Depth: How Large Language Models Acquire Knowledge at Different Layers?
☆81 · Updated 9 months ago
Alternatives and similar repositories for CD
Users interested in CD are comparing it to the libraries listed below:
- [ACL'25 Oral] What Happened in LLMs Layers when Trained for Fast vs. Slow Thinking: A Gradient Perspective ☆74 · Updated 4 months ago
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆140 · Updated 4 months ago
- PaCE: Parsimonious Concept Engineering for Large Language Models (NeurIPS 2024) ☆40 · Updated last year
- Function Vectors in Large Language Models (ICLR 2024) ☆183 · Updated 6 months ago
- [NeurIPS 2024] Knowledge Circuits in Pretrained Transformers ☆159 · Updated 8 months ago
- Co-LLM: Learning to Decode Collaboratively with Multiple Language Models ☆122 · Updated last year
- Official repository for Montessori-Instruct: Generate Influential Training Data Tailored for Student Learning [ICLR 2025] ☆48 · Updated 9 months ago
- ☆135 · Updated 2 months ago
- [ACL 2025] Knowledge Unlearning for Large Language Models ☆46 · Updated last month
- [NeurIPS 2024 Spotlight] Code and data for the paper "Finding Transformer Circuits with Edge Pruning". ☆61 · Updated 3 months ago
- ☆51 · Updated 9 months ago
- Code for the EMNLP 2024 paper "Detecting and Mitigating Contextual Hallucinations in Large Language Models Using Only Attention Maps" ☆141 · Updated last month
- ☆131 · Updated 8 months ago
- A Sober Look at Language Model Reasoning ☆87 · Updated last month
- [ICLR 2025] SuperCorrect: Advancing Small LLM Reasoning with Thought Template Distillation and Self-Correction ☆83 · Updated 7 months ago
- Official PyTorch Implementation of EMoE: Unlocking Emergent Modularity in Large Language Models [main conference @ NAACL2024] ☆35 · Updated last year
- ☆41 · Updated 2 years ago
- AnchorAttention: Improved attention for LLM long-context training ☆213 · Updated 10 months ago
- Code for "Language Models Can Learn from Verbal Feedback Without Scalar Rewards" ☆51 · Updated last month
- [ACL'24] Chain of Thought (CoT) is significant in improving the reasoning abilities of large language models (LLMs). However, the correla… ☆46 · Updated 6 months ago
- ☆197 · Updated 6 months ago
- ☆52 · Updated 7 months ago
- This repository contains the code and data for the paper "SelfIE: Self-Interpretation of Large Language Model Embeddings" by Haozhe Chen,… ☆53 · Updated 11 months ago
- Code for "Reasoning to Learn from Latent Thoughts" ☆122 · Updated 7 months ago
- Unofficial Implementation of Chain-of-Thought Reasoning Without Prompting ☆33 · Updated last year
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆121 · Updated last year
- Long Context Extension and Generalization in LLMs ☆62 · Updated last year
- Reinforcing General Reasoning without Verifiers ☆91 · Updated 4 months ago
- [EMNLP 2025] LightThinker: Thinking Step-by-Step Compression ☆121 · Updated 7 months ago
- ☆103 · Updated last year