logix-project / logix
AI Logging for Interpretability and Explainability🔬
☆124 · Updated last year
Alternatives and similar repositories for logix
Users interested in logix are comparing it to the libraries listed below.
- ☆97 · Updated last year
- DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models (ICLR 2024) ☆75 · Updated 11 months ago
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity. ☆77 · Updated 6 months ago
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆128 · Updated 2 months ago
- [ICLR 2025] General-purpose activation steering library ☆102 · Updated 2 weeks ago
- ☆55 · Updated 2 years ago
- ☆51 · Updated 4 months ago
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆63 · Updated 9 months ago
- ☆26 · Updated 7 months ago
- Function Vectors in Large Language Models (ICLR 2024) ☆179 · Updated 4 months ago
- For OpenMOSS Mechanistic Interpretability Team's Sparse Autoencoder (SAE) research. ☆146 · Updated this week
- [NeurIPS'23] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors ☆79 · Updated 8 months ago
- ☆48 · Updated last year
- ☆50 · Updated last year
- ☆38 · Updated last year
- Steering Llama 2 with Contrastive Activation Addition ☆180 · Updated last year
- ☆166 · Updated 9 months ago
- LoFiT: Localized Fine-tuning on LLM Representations ☆41 · Updated 7 months ago
- The Paper List on Data Contamination for Large Language Models Evaluation. ☆99 · Updated last week
- A Survey on Data Selection for Language Models ☆247 · Updated 4 months ago
- Answering "How to do patching on all available SAEs on GPT-2?". Official repository of the implementation of the p… ☆12 · Updated 7 months ago
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆110 · Updated 6 months ago
- A fast, effective data attribution method for neural networks in PyTorch ☆217 · Updated 9 months ago
- Data and code for the preprint "In-Context Learning with Long-Context Models: An In-Depth Exploration" ☆39 · Updated last year
- Evaluate interpretability methods on localizing and disentangling concepts in LLMs. ☆53 · Updated 11 months ago
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆118 · Updated last year
- `dattri` is a PyTorch library for developing, benchmarking, and deploying efficient data attribution algorithms. ☆84 · Updated 3 months ago
- Repo accompanying our paper "Do Llamas Work in English? On the Latent Language of Multilingual Transformers". ☆78 · Updated last year
- PASTA: Post-hoc Attention Steering for LLMs ☆122 · Updated 9 months ago
- ☆29 · Updated last year