VectorInstitute / vectorlm
LLM finetuning in resource-constrained environments.
☆47 · Updated last year
Alternatives and similar repositories for vectorlm
Users interested in vectorlm are comparing it to the libraries listed below.
- Efficient LLM inference on Slurm clusters using vLLM. ☆65 · Updated last week
- AI Logging for Interpretability and Explainability 🔬 ☆123 · Updated last year
- ☆69 · Updated 3 years ago
- Experiments and code to generate the GINC small-scale in-context learning dataset from "An Explanation for In-context Learning as Implici… ☆106 · Updated last year
- Influence Functions with (Eigenvalue-corrected) Kronecker-Factored Approximate Curvature ☆156 · Updated last week
- DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models (ICLR 2024) ☆70 · Updated 8 months ago
- nanoGPT-like codebase for LLM training ☆98 · Updated last month
- ☆95 · Updated last year
- NeuroSurgeon is a package that enables researchers to uncover and manipulate subnetworks within models in Huggingface Transformers ☆41 · Updated 4 months ago
- This repository contains the code used for the experiments in the paper "Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity… ☆27 · Updated last year
- A library for efficient patching and automatic circuit discovery. ☆67 · Updated 2 months ago
- ☆95 · Updated 4 months ago
- Code for the NeurIPS 2024 ATTRIB paper "Attribution Patching Outperforms Automated Circuit Discovery" ☆35 · Updated last year
- ☆121 · Updated last year
- ☆101 · Updated 3 weeks ago
- Steering vectors for transformer language models in PyTorch / Huggingface ☆108 · Updated 4 months ago
- `dattri` is a PyTorch library for developing, benchmarking, and deploying efficient data attribution algorithms. ☆77 · Updated 2 weeks ago
- ☆74 · Updated last year
- Source code of "Task arithmetic in the tangent space: Improved editing of pre-trained models". ☆102 · Updated 2 years ago
- Materials for the EACL 2024 tutorial: Transformer-specific Interpretability ☆55 · Updated last year
- General-purpose activation steering library ☆78 · Updated last month
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity ☆72 · Updated 3 months ago
- ☆26 · Updated 4 months ago
- A fast, effective data attribution method for neural networks in PyTorch ☆211 · Updated 7 months ago
- Codebase for "Linguistic Collapse: Neural Collapse in (Large) Language Models" [NeurIPS 2024] [arXiv:2405.17767] ☆13 · Updated 2 months ago
- ☆60 · Updated 3 years ago
- ☆180 · Updated last year
- Official code for the paper "Probing the Decision Boundaries of In-context Learning in Large Language Models". https://arxiv.org/abs/2406.11233… ☆18 · Updated 9 months ago
- Evaluate interpretability methods on localizing and disentangling concepts in LLMs. ☆47 · Updated 8 months ago
- ☆77 · Updated 4 months ago