avidml / evaluating-LLMs
Creating the tools and data sets necessary to evaluate vulnerabilities in LLMs.
☆25 · Updated 5 months ago
Alternatives and similar repositories for evaluating-LLMs
Users interested in evaluating-LLMs are comparing it to the libraries listed below.
- Find and fix bugs in natural language machine learning models using adaptive testing. ☆185 · Updated last year
- 📚 A curated list of papers & technical articles on AI Quality & Safety ☆191 · Updated 4 months ago
- AI Data Management & Evaluation Platform ☆216 · Updated last year
- Repository for research in the field of Responsible NLP at Meta. ☆202 · Updated 3 months ago
- 🤗 Disaggregators: Curated data labelers for in-depth analysis. ☆65 · Updated 2 years ago
- AuditNLG: Auditing Generative AI Language Modeling for Trustworthiness ☆102 · Updated 7 months ago
- Stanford CRFM's initiative to assess potential compliance with the draft EU AI Act ☆93 · Updated last year
- This repo contains the code for generating the ToxiGen dataset, published at ACL 2022. ☆330 · Updated last year
- ☆217 · Updated 4 years ago
- A framework for few-shot evaluation of autoregressive language models. ☆105 · Updated 2 years ago
- Notebooks for training universal 0-shot classifiers on many different tasks ☆136 · Updated 8 months ago
- Pipeline for pulling and processing online language model pretraining data from the web ☆177 · Updated 2 years ago
- Fiddler Auditor is a tool to evaluate language models. ☆187 · Updated last year
- The official code of LM-Debugger, an interactive tool for inspection and intervention in transformer-based language models. ☆178 · Updated 3 years ago
- Annotated corpus + evaluation metrics for text anonymisation ☆61 · Updated last month
- [EMNLP 2023 Demo] fabricator - annotating and generating datasets with large language models. ☆110 · Updated last year
- Run safety benchmarks against AI models and view detailed reports showing how well they performed. ☆105 · Updated this week
- Topic modeling helpers using managed language models from Cohere. Name text clusters using large GPT models. ☆223 · Updated 2 years ago
- A python package for benchmarking interpretability techniques on Transformers. ☆214 · Updated 11 months ago
- Experiments on including metadata such as URLs, timestamps, website descriptions and HTML tags during pretraining. ☆31 · Updated 2 years ago
- 💬 Language Identification with Support for More Than 2000 Labels -- EMNLP 2023 ☆149 · Updated 2 months ago
- The Foundation Model Transparency Index ☆82 · Updated last year
- Dataset collection and preprocessing framework for NLP extreme multi-task learning ☆186 · Updated last month
- FastFit ⚡ When LLMs are Unfit Use FastFit ⚡ Fast and Effective Text Classification with Many Classes ☆211 · Updated 3 months ago
- triple-encoders is a library for contextualizing distributed Sentence Transformers representations. ☆14 · Updated 11 months ago
- Credo AI Lens is a comprehensive assessment framework for AI systems. Lens standardizes model and data assessment, and acts as a central … ☆47 · Updated last year
- RATransformers 🐭 - Make your transformer (like BERT, RoBERTa, GPT-2 and T5) Relation Aware! ☆41 · Updated 2 years ago
- ☆66 · Updated 2 years ago
- Deliver safe & effective language models ☆535 · Updated last week
- Code for Multilingual Eval of Generative AI paper published at EMNLP 2023 ☆70 · Updated last year