alisawuffles / tokenizer-attack
Official implementation of "Data Mixture Inference: What do BPE tokenizers reveal about their training data?"
☆16 · Updated 5 months ago
Alternatives and similar repositories for tokenizer-attack
Users interested in tokenizer-attack are comparing it to the libraries listed below.
- This is the repo for constructing a comprehensive and rigorous evaluation framework for LLM calibration. ☆13 · Updated last year
- https://footprints.baulab.info ☆17 · Updated last year
- Code for the paper "Distinguishing the Knowable from the Unknowable with Language Models" ☆10 · Updated last year
- ☆13 · Updated 2 years ago
- ☆55 · Updated 2 years ago
- ☆20 · Updated this week
- Documenting large text datasets 🖼️ 📚 ☆14 · Updated 10 months ago
- ☆36 · Updated 2 years ago
- Code for the paper "REV: Information-Theoretic Evaluation of Free-Text Rationales" ☆16 · Updated 2 years ago
- Code repository for the paper "Mission: Impossible Language Models." ☆54 · Updated last month
- Codebase for Context-aware Meta-learned Loss Scaling (CaMeLS). https://arxiv.org/abs/2305.15076 ☆25 · Updated last year
- Landing page for MIB: A Mechanistic Interpretability Benchmark ☆21 · Updated 2 months ago
- ☆13 · Updated 4 months ago
- ☆14 · Updated last month
- Simple and scalable tools for data-driven pretraining data selection. ☆28 · Updated 5 months ago
- This is the official implementation for our ACL 2024 paper: "Causal Estimation of Memorisation Profiles". ☆23 · Updated 7 months ago
- The Codebase for Causal Distillation for Language Models (NAACL '22) ☆25 · Updated 3 years ago
- Fairer Preferences Elicit Improved Human-Aligned Large Language Model Judgments (Zhou et al., EMNLP 2024) ☆13 · Updated last year
- CEBaB: Estimating the Causal Effects of Real-World Concepts on NLP Model Behavior ☆12 · Updated 3 years ago
- Teaching Models to Express Their Uncertainty in Words ☆39 · Updated 3 years ago
- ACL24 ☆10 · Updated last year
- This repository contains some of the code used in the paper "Training Language Models with Language Feedback at Scale" ☆27 · Updated 2 years ago
- Measuring if attention is explanation with ROAR ☆22 · Updated 2 years ago
- Easy-to-use MIRAGE code for faithful answer attribution in RAG applications. Paper: https://aclanthology.org/2024.emnlp-main.347/ ☆25 · Updated 7 months ago
- Align your LM to express calibrated verbal statements of confidence in its long-form generations. ☆27 · Updated last year
- Adding new tasks to T0 without catastrophic forgetting ☆33 · Updated 3 years ago
- Code for the paper "Spectral Editing of Activations for Large Language Model Alignments" ☆28 · Updated 10 months ago
- Skill-It! A Data-Driven Skills Framework for Understanding and Training Language Models ☆47 · Updated 2 years ago
- Code for reproducing our paper "Low Rank Adapting Models for Sparse Autoencoder Features" ☆17 · Updated 7 months ago
- Fast Axiomatic Attribution for Neural Networks (NeurIPS 2021) ☆16 · Updated 2 years ago