inseq-team / inseq
Interpretability for sequence generation models
★441 · Updated last month
Alternatives and similar repositories for inseq
Users interested in inseq are comparing it to the libraries listed below.
- Tools for understanding how transformer predictions are built layer-by-layer · ★532 · Updated 2 months ago
- Stanford NLP Python library for understanding and improving PyTorch models via interventions · ★819 · Updated last week
- Materials for EACL2024 tutorial: Transformer-specific Interpretability · ★60 · Updated last year
- What's In My Big Data (WIMBD) - a toolkit for analyzing large text datasets · ★223 · Updated 11 months ago
- Repository for research in the field of Responsible NLP at Meta. · ★202 · Updated 5 months ago
- ★369 · Updated this week
- Code for T-Few from "Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning" · ★457 · Updated 2 years ago
- How do transformer LMs encode relations? · ★55 · Updated last year
- The official code of LM-Debugger, an interactive tool for inspection and intervention in transformer-based language models. · ★179 · Updated 3 years ago
- Aligning AI With Shared Human Values (ICLR 2021) · ★303 · Updated 2 years ago
- Package to compute Mauve, a similarity score between neural text and human text. Install with `pip install mauve-text`. · ★298 · Updated last year
- This repository collects all relevant resources about interpretability in LLMs · ★375 · Updated 11 months ago
- ★279 · Updated last year
- Erasing concepts from neural representations with provable guarantees · ★238 · Updated 8 months ago
- A Python library that encapsulates various methods for neuron interpretation and analysis in Deep NLP models. · ★106 · Updated 2 years ago
- Dataset collection and preprocessing framework for NLP extreme multitask learning · ★188 · Updated 3 months ago
- A Python package for benchmarking interpretability techniques on Transformers. · ★213 · Updated last year
- A Python package to run inference with HuggingFace language and vision-language checkpoints, wrapping many convenient features. · ★28 · Updated last year
- PAIR.withgoogle.com and friends' work on interpretability methods · ★204 · Updated last month
- Seminar on Large Language Models (COMP790-101 at UNC Chapel Hill, Fall 2022) · ★311 · Updated 2 years ago
- ★81 · Updated 7 months ago
- ★247 · Updated last year
- Mechanistic Interpretability Visualizations using React · ★293 · Updated 10 months ago
- A library for finding knowledge neurons in pretrained transformer models. · ★159 · Updated 3 years ago
- PyTorch + HuggingFace code for RetoMaton: "Neuro-Symbolic Language Modeling with Automaton-augmented Retrieval" (ICML 2022), including an… · ★280 · Updated 3 years ago
- Mistral: A strong, northwesterly wind: Framework for transparent and accessible large-scale language model training, built with Hugging F… · ★574 · Updated last year
- Utilities for the HuggingFace transformers library · ★72 · Updated 2 years ago
- A framework for few-shot evaluation of autoregressive language models. · ★104 · Updated 2 years ago
- ★234 · Updated last year
- The nnsight package enables interpreting and manipulating the internals of deep learned models. · ★683 · Updated last week