inseq-team / inseq
Interpretability for sequence generation models
☆444 · Updated 2 weeks ago
Alternatives and similar repositories for inseq
Users interested in inseq are comparing it to the libraries listed below.
- Stanford NLP Python library for understanding and improving PyTorch models via interventions · ☆826 · Updated 3 weeks ago
- Tools for understanding how transformer predictions are built layer-by-layer · ☆539 · Updated 3 months ago
- ☆375 · Updated last week
- Repository for research in the field of Responsible NLP at Meta · ☆202 · Updated 5 months ago
- What's In My Big Data (WIMBD): a toolkit for analyzing large text datasets · ☆223 · Updated 11 months ago
- Materials for the EACL 2024 tutorial "Transformer-specific Interpretability" · ☆60 · Updated last year
- Code for T-Few from "Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning" · ☆457 · Updated 2 years ago
- A Python library that encapsulates various methods for neuron interpretation and analysis in deep NLP models · ☆106 · Updated 2 years ago
- The official code of LM-Debugger, an interactive tool for inspection and intervention in transformer-based language models · ☆180 · Updated 3 years ago
- Package to compute MAUVE, a similarity score between neural text and human text; install with `pip install mauve-text` · ☆299 · Updated last year
- Seminar on Large Language Models (COMP790-101 at UNC Chapel Hill, Fall 2022) · ☆313 · Updated 2 years ago
- A repository collecting all relevant resources on interpretability in LLMs · ☆379 · Updated last year
- Organize your experiments into discrete steps that can be cached and reused throughout the lifetime of your research project · ☆565 · Updated last year
- PAIR.withgoogle.com and friends' work on interpretability methods · ☆209 · Updated last month
- A framework for few-shot evaluation of autoregressive language models · ☆104 · Updated 2 years ago
- Erasing concepts from neural representations with provable guarantees · ☆239 · Updated 9 months ago
- Utilities for the HuggingFace transformers library · ☆72 · Updated 2 years ago
- Mistral: A strong, northwesterly wind: Framework for transparent and accessible large-scale language model training, built with Hugging F… · ☆575 · Updated 2 years ago
- A prize for finding tasks that cause large language models to show inverse scaling · ☆616 · Updated 2 years ago
- How do transformer LMs encode relations? · ☆55 · Updated last year
- PyTorch + HuggingFace code for RetoMaton: "Neuro-Symbolic Language Modeling with Automaton-augmented Retrieval" (ICML 2022), including an… · ☆282 · Updated 3 years ago
- ACL 2022: An Empirical Survey of the Effectiveness of Debiasing Techniques for Pre-trained Language Models · ☆150 · Updated 2 months ago
- ☆65 · Updated 2 years ago
- ☆281 · Updated last year
- Aligning AI With Shared Human Values (ICLR 2021) · ☆303 · Updated 2 years ago
- Mechanistic Interpretability Visualizations using React · ☆299 · Updated 10 months ago
- A Python package for benchmarking interpretability techniques on Transformers · ☆212 · Updated last year
- Datasets collection and preprocessing framework for NLP extreme multitask learning · ☆188 · Updated 4 months ago
- Powerful unsupervised domain adaptation method for dense retrieval; requires only an unlabeled corpus and yields massive improvements: "GPL: … · ☆338 · Updated 2 years ago
- ☆252 · Updated last year