zouharvi / ryanize-bib
Highlight errors in a bib file: missing URLs, missing capitalization protection, etc.
☆26 · Updated 8 months ago
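The checks named in the description (missing URLs, unprotected capitalization) can be sketched in a few lines of Python. This is an illustrative approximation, not ryanize-bib's actual implementation, and the regexes are deliberate simplifications that will miss unusual BibTeX formatting:

```python
import re

# Illustrative sketch only -- NOT the actual ryanize-bib code.
# It flags two problems the repository description mentions: entries
# without a "url" field, and mid-title capitalized words that are not
# brace-protected (BibTeX styles may otherwise lowercase them).

ENTRY_RE = re.compile(r"@\w+\{(?P<key>[^,]+),(?P<body>.*?)\n\}", re.DOTALL)
FIELD_RE = re.compile(r"(?P<name>\w+)\s*=\s*[{\"](?P<value>.*?)[}\"]\s*,?\s*$",
                      re.MULTILINE)

def check_bib(text):
    """Return (entry_key, warning) pairs for a BibTeX source string."""
    warnings = []
    for entry in ENTRY_RE.finditer(text):
        key = entry.group("key").strip()
        fields = {m.group("name").lower(): m.group("value")
                  for m in FIELD_RE.finditer(entry.group("body"))}
        if "url" not in fields:
            warnings.append((key, "missing url"))
        # Words after the first that start uppercase should sit in braces,
        # e.g. "{BERT}" rather than "Bert".
        for word in fields.get("title", "").split()[1:]:
            if word[:1].isupper() and "{" not in word:
                warnings.append((key, f"unprotected capitalization: {word}"))
    return warnings
```

For example, `check_bib(open("refs.bib").read())` would return pairs such as `("smith2020", "missing url")` for each flagged entry.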
Alternatives and similar repositories for ryanize-bib:
Users who are interested in ryanize-bib are comparing it to the repositories listed below.
- Code for the preprint "Summarizing Differences between Text Distributions with Natural Language" (☆42, updated last year)
- Code for the paper "Do Language Models Have Beliefs? Methods for Detecting, Updating, and Visualizing Model Beliefs" (☆28, updated 2 years ago)
- Code for the paper "Implicit Representations of Meaning in Neural Language Models" (☆50, updated last year)
- Code repository for the paper "Mission: Impossible Language Models" (☆42, updated 2 weeks ago)
- Code for "Tracing Knowledge in Language Models Back to the Training Data" (☆37, updated 2 years ago)
- Repo for the ICML 2023 paper "Why Do Nearest Neighbor Language Models Work?" (☆56, updated 2 years ago)
- Tasks for describing differences between text distributions (☆16, updated 5 months ago)
- Materials for the EACL 2024 tutorial "Transformer-specific Interpretability" (☆44, updated 10 months ago)
- Repo accompanying the paper "Do Llamas Work in English? On the Latent Language of Multilingual Transformers" (☆65, updated 10 months ago)
- Few-shot Learning with Auxiliary Data (☆26, updated last year)
- CausalGym: Benchmarking causal interpretability methods on linguistic tasks (☆40, updated last month)
- A library for efficient patching and automatic circuit discovery (☆48, updated 2 months ago)
- Materials for "Prompting is not a substitute for probability measurements in large language models" (EMNLP 2023) (☆19, updated last year)
- Benchmark API for Multidomain Language Modeling (☆24, updated 2 years ago)
- The LM Contamination Index, a manually curated database of contamination evidence for LMs (☆76, updated 9 months ago)
- The accompanying code for "Transformer Feed-Forward Layers Are Key-Value Memories", by Mor Geva, Roei Schuster, Jonathan Berant, and Omer Le… (☆89, updated 3 years ago)
- Data for evaluating gender bias in coreference resolution systems (☆72, updated 5 years ago)
- Teaching Models to Express Their Uncertainty in Words (☆36, updated 2 years ago)
- Dataset and classifier tools for studying social perception biases in natural language generation (☆67, updated last year)
- Code for the NAACL 2022 paper "Efficient Hierarchical Domain Adaptation for Pretrained Language Models" (☆32, updated last year)
- Röttger et al. (2023), "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" (☆79, updated last year)