locuslab / llm-idiosyncrasies
Code release for "Idiosyncrasies in Large Language Models"
☆33 · Updated 4 months ago
Alternatives and similar repositories for llm-idiosyncrasies
Users interested in llm-idiosyncrasies are comparing it to the repositories listed below.
- [ICLR 2025] Cheating Automatic LLM Benchmarks: Null Models Achieve High Win Rates (Oral) ☆79 · Updated 8 months ago
- What do we learn from inverting CLIP models? ☆55 · Updated last year
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆95 · Updated 3 weeks ago
- ☆18 · Updated 4 months ago
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs ☆89 · Updated 7 months ago
- Sparse and discrete interpretability tool for neural networks ☆63 · Updated last year
- Improving Your Model Ranking on Chatbot Arena by Vote Rigging (ICML 2025) ☆21 · Updated 4 months ago
- Erasing conceptual knowledge from language models through low-rank fine-tuning ☆18 · Updated 3 months ago
- ☆26 · Updated 4 months ago
- PaCE: Parsimonious Concept Engineering for Large Language Models (NeurIPS 2024) ☆37 · Updated 7 months ago
- ☆32 · Updated 5 months ago
- Codebase for Obfuscated Activations Bypass LLM Latent-Space Defenses ☆20 · Updated 4 months ago
- Function Vectors in Large Language Models (ICLR 2024) ☆170 · Updated 2 months ago
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers" ☆109 · Updated last year
- ☆48 · Updated last year
- Package to optimize Adversarial Attacks against (Large) Language Models with Varied Objectives ☆69 · Updated last year
- Unofficial Implementation of Chain-of-Thought Reasoning Without Prompting ☆32 · Updated last year
- ☆44 · Updated last year
- ☆73 · Updated 5 months ago
- ☆35 · Updated 6 months ago
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆75 · Updated 7 months ago
- Memory Mosaics are networks of associative memories working in concert to achieve a prediction task ☆44 · Updated 4 months ago
- [COLING 2025] Exploring Concept Depth: How Large Language Models Acquire Knowledge at Different Layers? ☆79 · Updated 5 months ago
- AI Logging for Interpretability and Explainability 🔬 ☆123 · Updated last year
- ☆12 · Updated 2 years ago
- [NeurIPS 2024] LLM Safety Landscape ☆22 · Updated 4 months ago
- Code for our paper "Decomposing the Dark Matter of Sparse Autoencoders" ☆22 · Updated 4 months ago
- [ICLR 2025] Is In-Context Learning Sufficient for Instruction Following in LLMs? ☆30 · Updated 5 months ago
- Sparse Autoencoder Training Library ☆52 · Updated last month
- ☆35 · Updated 2 years ago