alisawuffles / tokenizer-attack
Official implementation of "Data Mixture Inference: What do BPE tokenizers reveal about their training data?"
☆18 · Updated 7 months ago
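The signal the paper exploits is that BPE merge rules are learned greedily, so the order of a tokenizer's merge list reflects pair frequencies in its training data. The sketch below is only a rough illustration of that signal, not the repository's method: it reads a local GPT-2-style `merges.txt` (an assumed file path) and reports how early a merge containing non-ASCII characters appears, a crude proxy for non-English content. The paper itself infers mixture proportions from the full ordered merge list via a constrained optimization.

```python
# Minimal sketch (not the paper's method): probe a BPE merge list for a crude
# signal about the tokenizer's training data. Assumes a local "merges.txt" in
# the usual GPT-2-style format: a "#version" header line, then one merge
# ("left right") per line, in the order the merges were learned.
from pathlib import Path

def load_merges(path="merges.txt"):
    """Return BPE merges as a list of (left, right) pairs, in learned (rank) order."""
    merges = []
    for line in Path(path).read_text(encoding="utf-8").splitlines():
        if line.startswith("#") or not line.strip():
            continue  # skip version header and blank lines
        left, right = line.split(" ", 1)
        merges.append((left, right))
    return merges

def rank_of_first_match(merges, predicate):
    """0-based rank of the earliest merge whose joined token satisfies `predicate`."""
    for rank, (left, right) in enumerate(merges):
        if predicate(left + right):
            return rank
    return None

if __name__ == "__main__":
    merges = load_merges()
    # Example probe: how early does a merge containing non-ASCII characters appear?
    # A low rank suggests a lot of non-ASCII-heavy text in the training corpus.
    # (Byte-level BPE stores bytes remapped to printable characters, so this is
    # only a rough proxy, not a per-language attribution.)
    rank = rank_of_first_match(merges, lambda tok: any(ord(c) > 127 for c in tok))
    print(f"{len(merges)} merges loaded; first non-ASCII merge at rank {rank}")
```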
Alternatives and similar repositories for tokenizer-attack
Users interested in tokenizer-attack are comparing it to the repositories listed below
- This is the repo for constructing a comprehensive and rigorous evaluation framework for LLM calibration. ☆13 · Updated last year
- https://footprints.baulab.info ☆17 · Updated last year
- ☆20 · Updated 2 months ago
- Code repository for the paper "Mission: Impossible Language Models." ☆56 · Updated 3 months ago
- Code for the paper "Distinguishing the Knowable from the Unknowable with Language Models" ☆10 · Updated last year
- Documenting large text datasets 🖼️ 📚 ☆14 · Updated last year
- Official Repository for Dataset Inference for LLMs ☆43 · Updated last year
- Evaluate interpretability methods on localizing and disentangling concepts in LLMs. ☆57 · Updated 2 months ago
- Is In-Context Learning Sufficient for Instruction Following in LLMs? [ICLR 2025] ☆32 · Updated 11 months ago
- The Codebase for Causal Distillation for Language Models (NAACL '22) ☆26 · Updated 3 years ago
- Finding semantically meaningful and accurate prompts. ☆48 · Updated 2 years ago
- ☆51 · Updated 2 years ago
- ☆36 · Updated 2 years ago
- Code for the paper "REV: Information-Theoretic Evaluation of Free-Text Rationales" ☆16 · Updated 2 years ago
- Code for the paper "Spectral Editing of Activations for Large Language Model Alignments"☆29Updated last year
- Code for reproducing our paper "Low Rank Adapting Models for Sparse Autoencoder Features" ☆17 · Updated 9 months ago
- ☆13 · Updated 6 months ago
- Data and code for the preprint "In-Context Learning with Long-Context Models: An In-Depth Exploration" ☆42 · Updated last year
- ACL24 ☆11 · Updated last year
- Codebase for Context-aware Meta-learned Loss Scaling (CaMeLS). https://arxiv.org/abs/2305.15076 ☆25 · Updated last year
- Align your LM to express calibrated verbal statements of confidence in its long-form generations. ☆28 · Updated last year
- This is the official implementation for our ACL 2024 paper: "Causal Estimation of Memorisation Profiles". ☆24 · Updated 9 months ago
- Fairer Preferences Elicit Improved Human-Aligned Large Language Model Judgments (Zhou et al., EMNLP 2024) ☆14 · Updated last year
- Code to reproduce key results accompanying "SAEs (usually) Transfer Between Base and Chat Models" ☆13 · Updated last year
- Measuring if attention is explanation with ROAR ☆22 · Updated 2 years ago
- ☆56 · Updated 2 years ago
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆161 · Updated 6 months ago
- Interpreting the latent space representations of attention head outputs for LLMs ☆36 · Updated last year
- Code release for the 2023 NeurIPS paper "How does GPT-2 compute greater-than?: Interpreting mathematical abilities in a pre-trained langua…" ☆17 · Updated last year
- ☆16 · Updated last year