inseq-team / inseq
Interpretability for sequence generation models
⭐ 451 · Updated last week
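As a point of comparison for the libraries below, here is a minimal sketch of how inseq itself is typically used, based on its documented `load_model`/`attribute` quickstart (exact method names and defaults may vary across versions):

```python
# Minimal attribution sketch (assumes `pip install inseq`);
# follows the project's documented quickstart, which may change between releases.
import inseq

# Wrap a Hugging Face model with an attribution method.
model = inseq.load_model("gpt2", "integrated_gradients")

# Attribute a generation and visualize token-level importance scores.
out = model.attribute("Interpretability for sequence generation models is")
out.show()
```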
Alternatives and similar repositories for inseq
Users interested in inseq are comparing it to the libraries listed below.
- Tools for understanding how transformer predictions are built layer-by-layer (⭐ 559 · Updated 5 months ago)
- What's In My Big Data (WIMBD) - a toolkit for analyzing large text datasets (⭐ 225 · Updated last year)
- Stanford NLP Python library for understanding and improving PyTorch models via interventions (⭐ 849 · Updated 2 months ago)
- Materials for EACL2024 tutorial: Transformer-specific Interpretability (⭐ 61 · Updated last year)
- Package to compute Mauve, a similarity score between neural text and human text. Install with `pip install mauve-text`. (⭐ 306 · Updated last year)
- Repository for research in the field of Responsible NLP at Meta. (⭐ 204 · Updated 7 months ago)
- The official code of LM-Debugger, an interactive tool for inspection and intervention in transformer-based language models. (⭐ 180 · Updated 3 years ago)
- ⭐ 414 · Updated this week
- This repository collects all relevant resources about interpretability in LLMs (⭐ 389 · Updated last year)
- PAIR.withgoogle.com and friend's work on interpretability methods (⭐ 217 · Updated last month)
- A Python library that encapsulates various methods for neuron interpretation and analysis in Deep NLP models. (⭐ 106 · Updated 2 years ago)
- Code for T-Few from "Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning" (⭐ 457 · Updated 2 years ago)
- Erasing concepts from neural representations with provable guarantees (⭐ 242 · Updated 11 months ago)
- Dataset collection and preprocessing framework for NLP extreme multitask learning (⭐ 189 · Updated 6 months ago)
- Aligning AI With Shared Human Values (ICLR 2021) (⭐ 311 · Updated 2 years ago)
- Utilities for the HuggingFace transformers library (⭐ 73 · Updated 2 years ago)
- PyTorch + HuggingFace code for RetoMaton: "Neuro-Symbolic Language Modeling with Automaton-augmented Retrieval" (ICML 2022), including an… (⭐ 285 · Updated 3 years ago)
- StereoSet: Measuring stereotypical bias in pretrained language models (⭐ 194 · Updated 3 years ago)
- ⭐ 283 · Updated last year
- ACL 2022: An Empirical Survey of the Effectiveness of Debiasing Techniques for Pre-trained Language Models. (⭐ 153 · Updated 4 months ago)
- How do transformer LMs encode relations? (⭐ 55 · Updated last year)
- A framework for few-shot evaluation of autoregressive language models. (⭐ 105 · Updated 2 years ago)
- Mistral: A strong, northwesterly wind: Framework for transparent and accessible large-scale language model training, built with Hugging F… (⭐ 576 · Updated 2 years ago)
- Seminar on Large Language Models (COMP790-101 at UNC Chapel Hill, Fall 2022) (⭐ 313 · Updated 3 years ago)
- Calculate perplexity on a text with pre-trained language models. Supports MLM (e.g. DeBERTa), recurrent LM (e.g. GPT3), and encoder-decoder … (⭐ 165 · Updated 6 months ago)
- Organize your experiments into discrete steps that can be cached and reused throughout the lifetime of your research project. (⭐ 565 · Updated last year)
- Sparse probing paper full code. (⭐ 66 · Updated 2 years ago)
- ⭐ 83 · Updated 10 months ago
- Reproduce results and replicate training of T0 (Multitask Prompted Training Enables Zero-Shot Task Generalization) (⭐ 464 · Updated 3 years ago)
- ⭐ 244 · Updated last year