AI4Bharat / FBI
FBI: Finding Blindspots in LLM Evaluations with Interpretable Checklists
☆31 · Updated 4 months ago
Alternatives and similar repositories for FBI
Users interested in FBI are comparing it to the libraries listed below.
- A toolkit implementing advanced methods to transfer models and model knowledge across tokenizers. ☆59 · Updated 5 months ago
- Code for the paper "Fishing for Magikarp" ☆177 · Updated 7 months ago
- ☆59 · Updated last year
- Code for Zero-Shot Tokenizer Transfer ☆142 · Updated 11 months ago
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs ☆93 · Updated last year
- Efficiently computing & storing token n-grams from large corpora ☆26 · Updated last year
- A package dedicated to running benchmark agreement testing ☆18 · Updated 3 months ago
- 🔍 Multilingual Evaluation of English-Centric LLMs via Cross-Lingual Alignment ☆11 · Updated 8 months ago
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆71 · Updated last year
- ☆65 · Updated 2 years ago
- ☆38 · Updated last year
- ☆23 · Updated 2 weeks ago
- ☆44 · Updated last year
- Synthetic Data Generation for Evaluation ☆13 · Updated 10 months ago
- https://footprints.baulab.info ☆17 · Updated last year
- Dataset collection and preprocessing framework for NLP extreme multitask learning ☆189 · Updated 5 months ago
- This repo contains code for the paper "Psychologically-informed chain-of-thought prompts for metaphor understanding in large language mod…" ☆14 · Updated 2 years ago
- Minimum Bayes Risk Decoding for Hugging Face Transformers ☆60 · Updated last year
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆45 · Updated 2 months ago
- ☆90 · Updated last week
- Functional Benchmarks and the Reasoning Gap ☆90 · Updated last year
- [EMNLP 2024] A Retrieval Benchmark for Scientific Literature Search ☆102 · Updated last year
- ☆42 · Updated last year
- Resources for cultural NLP research ☆113 · Updated 3 months ago
- The Official Repository for "Bring Your Own Data! Self-Supervised Evaluation for Large Language Models" ☆107 · Updated 2 years ago
- The official code of LM-Debugger, an interactive tool for inspection and intervention in transformer-based language models. ☆180 · Updated 3 years ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆62 · Updated last year
- Supercharge huggingface transformers with model parallelism. ☆77 · Updated 5 months ago
- Improving Text Embedding of Language Models Using Contrastive Fine-tuning ☆66 · Updated last year
- IndicGenBench is a high-quality, multilingual, multi-way parallel benchmark for evaluating Large Language Models (LLMs) on 4 user-facing … ☆56 · Updated last year