alisawuffles / tokenizer-attack
Official implementation of "Data Mixture Inference: What do BPE tokenizers reveal about their training data?"
☆14 · Updated 3 months ago
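The attack rests on a simple property of BPE training: merge rules are learned greedily by pair frequency, so the ordered merge list a tokenizer ships with reflects the mixture of data it was trained on. Below is a minimal sketch of that premise only, not the repo's implementation; the `train_bpe` helper and the two-domain toy corpus are illustrative assumptions:

```python
# Toy illustration of the premise behind the attack: BPE learns merges
# greedily by pair frequency, so the *order* of merges encodes the
# training-data mixture. (Illustrative sketch only; not the repo's code.)
from collections import Counter

def train_bpe(corpus, num_merges):
    """Greedy BPE training on a list of words; returns ordered merge rules."""
    words = [list(w) for w in corpus]
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for w in words:
            pairs.update(zip(w, w[1:]))  # count adjacent symbol pairs
        if not pairs:
            break
        (a, b), _ = pairs.most_common(1)[0]  # most frequent pair wins
        merges.append((a, b))
        for w in words:  # apply the winning merge everywhere
            i = 0
            while i < len(w) - 1:
                if (w[i], w[i + 1]) == (a, b):
                    w[i:i + 2] = [a + b]
                else:
                    i += 1
    return merges

# Two toy "domains" with disjoint alphabets, mixed 70/30.
corpus = ["aab"] * 70 + ["xyz"] * 30
print(train_bpe(corpus, 4))
# The majority domain's pairs claim the earliest merge slots, e.g.
# [('a', 'a'), ('aa', 'b'), ('x', 'y'), ('xy', 'z')] -- merge rank leaks the mix.
```

The actual attack runs in the other direction: treating a released tokenizer's merge list as observed, it solves for the mixture proportions over candidate data categories that best explain the merge order.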
Alternatives and similar repositories for tokenizer-attack
Users interested in tokenizer-attack are comparing it to the libraries listed below.
- Repository for constructing a comprehensive and rigorous evaluation framework for LLM calibration. ☆13 · Updated last year
- https://footprints.baulab.info ☆17 · Updated 10 months ago
- Interpreting the latent space representations of attention head outputs for LLMs ☆34 · Updated last year
- MergeBench: A Benchmark for Merging Domain-Specialized LLMs ☆20 · Updated 3 months ago
- Augmenting Statistical Models with Natural Language Parameters ☆27 · Updated 11 months ago
- Teaching Models to Express Their Uncertainty in Words ☆39 · Updated 3 years ago
- Evaluate interpretability methods on localizing and disentangling concepts in LLMs. ☆53 · Updated 10 months ago
- CausalGym: Benchmarking causal interpretability methods on linguistic tasks ☆46 · Updated 9 months ago
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆124 · Updated 2 months ago
- Code release for the NeurIPS 2023 paper "How does GPT-2 compute greater-than?: Interpreting mathematical abilities in a pre-trained language model" ☆13 · Updated 8 months ago
- Codebase for Context-aware Meta-learned Loss Scaling (CaMeLS). https://arxiv.org/abs/2305.15076 ☆25 · Updated last year
- Codebase for Causal Distillation for Language Models (NAACL '22) ☆25 · Updated 3 years ago
- Code for the paper "Spectral Editing of Activations for Large Language Model Alignments" ☆26 · Updated 8 months ago
- Data and code for the preprint "In-Context Learning with Long-Context Models: An In-Depth Exploration" ☆39 · Updated last year
- Code repository for the paper "Mission: Impossible Language Models." ☆53 · Updated 4 months ago
- Easy-to-use MIRAGE code for faithful answer attribution in RAG applications. Paper: https://aclanthology.org/2024.emnlp-main.347/ ☆25 · Updated 5 months ago
- Fairer Preferences Elicit Improved Human-Aligned Large Language Model Judgments (Zhou et al., EMNLP 2024) ☆13 · Updated 11 months ago
- Align your LM to express calibrated verbal statements of confidence in its long-form generations. ☆27 · Updated last year
- Landing page for MIB: A Mechanistic Interpretability Benchmark ☆19 · Updated 2 weeks ago
- Measuring if attention is explanation with ROAR ☆22 · Updated 2 years ago
- Finding semantically meaningful and accurate prompts. ☆47 · Updated last year
- Simple and scalable tools for data-driven pretraining data selection. ☆25 · Updated 2 months ago
- [NeurIPS 2023 D&B Track] Code and data for the paper "Revisiting Out-of-distribution Robustness in NLP: Benchmarks, Analysis, and LLMs Evaluations" ☆34 · Updated 2 years ago