avidml / evaluating-LLMs
Creating the tools and data sets necessary to evaluate vulnerabilities in LLMs.
☆24 · Updated 4 months ago
Alternatives and similar repositories for evaluating-LLMs
Users interested in evaluating-LLMs are comparing it to the libraries listed below.
- 📚 A curated list of papers & technical articles on AI Quality & Safety ☆188 · Updated 3 months ago
- Find and fix bugs in natural language machine learning models using adaptive testing. ☆184 · Updated last year
- Annotated corpus + evaluation metrics for text anonymisation ☆59 · Updated last year
- A Python package for benchmarking interpretability techniques on Transformers. ☆213 · Updated 9 months ago
- Stanford CRFM's initiative to assess potential compliance with the draft EU AI Act ☆94 · Updated last year
- Repository for research in the field of Responsible NLP at Meta. ☆201 · Updated 2 months ago
- Dataset collection and preprocessing framework for NLP extreme multitask learning ☆184 · Updated last week
- 🤗 Disaggregators: Curated data labelers for in-depth analysis. ☆66 · Updated 2 years ago
- 💬 Language Identification with Support for More Than 2000 Labels -- EMNLP 2023 ☆144 · Updated last month
- [EMNLP 2023 Demo] fabricator - annotating and generating datasets with large language models. ☆108 · Updated last year
- AuditNLG: Auditing Generative AI Language Modeling for Trustworthiness ☆101 · Updated 5 months ago
- Notebooks for training universal 0-shot classifiers on many different tasks ☆131 · Updated 6 months ago
- A library to synthesize text datasets using Large Language Models (LLMs) ☆152 · Updated 2 years ago
- A framework for few-shot evaluation of autoregressive language models. ☆105 · Updated 2 years ago
- The Foundation Model Transparency Index ☆82 · Updated last year
- The official code of LM-Debugger, an interactive tool for inspection and intervention in transformer-based language models. ☆177 · Updated 3 years ago
- Pipeline for pulling and processing online language model pretraining data from the web ☆178 · Updated last year
- Efficient Attention for Long Sequence Processing ☆95 · Updated last year
- [Data + code] ExpertQA: Expert-Curated Questions and Attributed Answers ☆131 · Updated last year
- Benchmarking Large Language Models ☆99 · Updated 3 weeks ago
- Codebase release for an EMNLP 2023 paper ☆19 · Updated 2 months ago
- FastFit ⚡ When LLMs are Unfit Use FastFit ⚡ Fast and Effective Text Classification with Many Classes ☆208 · Updated 2 months ago
- Code for the Multilingual Eval of Generative AI paper published at EMNLP 2023 ☆70 · Updated last year
- Fiddler Auditor is a tool to evaluate language models. ☆184 · Updated last year
- The Official Repository for "Bring Your Own Data! Self-Supervised Evaluation for Large Language Models" ☆107 · Updated last year
- Dataset from the paper "Mintaka: A Complex, Natural, and Multilingual Dataset for End-to-End Question Answering" (COLING 2022) ☆114 · Updated 2 years ago
- Powerful unsupervised domain adaptation method for dense retrieval. Requires only an unlabeled corpus and yields massive improvement: "GPL: … ☆336 · Updated 2 years ago