shengliu66 / ICV
Code for In-context Vectors: Making In Context Learning More Effective and Controllable Through Latent Space Steering
☆142 · Updated 3 weeks ago
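The idea behind ICV is to replace explicit in-context demonstrations with a latent "in-context vector" extracted from the demonstrations' hidden states, which is then added to the model's activations at inference time so generation is steered without spending context tokens. Below is a minimal sketch of that kind of latent-space steering, not the repository's exact recipe: the model (`gpt2`), the hooked block index `LAYER`, the strength `LAMBDA`, and the mean-difference construction of the vector are all illustrative assumptions.

```python
# Illustrative sketch of latent-space steering (in the spirit of ICV); all
# hyperparameters below are assumptions for demonstration, not the repo's values.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # any causal LM whose blocks we can hook; gpt2 keeps the sketch small
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

LAYER = 6      # transformer block to steer (assumption)
LAMBDA = 4.0   # steering strength (assumption)

def last_token_state(text: str, layer: int) -> torch.Tensor:
    """Hidden state of the final token after block `layer`."""
    with torch.no_grad():
        out = model(**tok(text, return_tensors="pt"), output_hidden_states=True)
    # hidden_states[0] is the embedding output, so index layer + 1 matches block `layer`.
    return out.hidden_states[layer + 1][0, -1]

# Toy paired demonstrations: (neutral, target-style). The steering direction is the
# mean difference of their last-token hidden states.
pairs = [
    ("The food was awful.", "The food was delightful."),
    ("I hated the service.", "I loved the service."),
]
icv = torch.stack(
    [last_token_state(pos, LAYER) - last_token_state(neg, LAYER) for neg, pos in pairs]
).mean(dim=0)
icv = icv / icv.norm()

def steer(_module, _inputs, output):
    """Forward hook: add the in-context vector to every position's hidden state."""
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + LAMBDA * icv.to(hidden.dtype)
    return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

handle = model.transformer.h[LAYER].register_forward_hook(steer)
try:
    prompt = "The movie was"
    gen = model.generate(**tok(prompt, return_tensors="pt"), max_new_tokens=20)
    print(tok.decode(gen[0], skip_special_tokens=True))
finally:
    handle.remove()  # detach the hook so the model behaves normally afterwards
```

The actual method described in the paper computes and injects its vectors in a more principled way (and across layers); treat this only as an orientation to the mechanism, and see the repository for the real implementation.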
Related projects
Alternatives and complementary repositories for ICV
- Function Vectors in Large Language Models (ICLR 2024) ☆116 · Updated 3 weeks ago
- Benchmarking LLMs with Challenging Tasks from Real Users ☆194 · Updated this week
- Code accompanying "How I learned to start worrying about prompt formatting". ☆92 · Updated last month
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆96 · Updated 7 months ago
- A Survey on Data Selection for Language Models ☆178 · Updated 3 weeks ago
- [NAACL 2024 Outstanding Paper] Source code for "R-Tuning: Instructing Large Language Models to Say 'I Don't Know'" ☆84 · Updated 3 months ago
- Code and data accompanying our paper on arXiv "Faithful Chain-of-Thought Reasoning". ☆156 · Updated 6 months ago
- [ICLR'24 Spotlight] "Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts" ☆59 · Updated 6 months ago
- Public code repo for paper "SaySelf: Teaching LLMs to Express Confidence with Self-Reflective Rationales" ☆93 · Updated last month
- AI Logging for Interpretability and Explainability 🔬 ☆87 · Updated 5 months ago
- [NeurIPS 2023] This is the code for the paper `Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias`. ☆140 · Updated last year
- Code for NeurIPS'24 paper 'Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization' ☆160 · Updated last month
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆124 · Updated 2 weeks ago
- PASTA: Post-hoc Attention Steering for LLMs ☆107 · Updated 2 months ago
- The Paper List on Data Contamination for Large Language Models Evaluation. ☆73 · Updated this week
- [EMNLP 2023] The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning ☆212 · Updated last year
- A simple unified framework for evaluating LLMs ☆138 · Updated this week
- Code for PHATGOOSE introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆78 · Updated 8 months ago
- Code and results accompanying the paper "Refusal in Language Models Is Mediated by a Single Direction". ☆117 · Updated last month
- MiniCheck: Efficient Fact-Checking of LLMs on Grounding Documents [EMNLP 2024] ☆100 · Updated 3 weeks ago
- Code and Data for "Long-context LLMs Struggle with Long In-context Learning" ☆91 · Updated 4 months ago
- Self-Alignment with Principle-Following Reward Models ☆148 · Updated 8 months ago
- Code for the EMNLP 2024 paper "Detecting and Mitigating Contextual Hallucinations in Large Language Models Using Only Attention Maps" ☆109 · Updated 2 months ago
- Implementation of the paper: "Answering Questions by Meta-Reasoning over Multiple Chains of Thought" ☆92 · Updated 9 months ago
- [NeurIPS'23] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors ☆69 · Updated 8 months ago