avidml / evaluating-LLMs
Creating the tools and data sets necessary to evaluate vulnerabilities in LLMs.
☆27 · Updated 10 months ago
Alternatives and similar repositories for evaluating-LLMs
Users interested in evaluating-LLMs are comparing it to the libraries listed below.
- Find and fix bugs in natural language machine learning models using adaptive testing. ☆188 · Updated last year
- 📚 A curated list of papers & technical articles on AI Quality & Safety ☆200 · Updated 9 months ago
- AuditNLG: Auditing Generative AI Language Modeling for Trustworthiness ☆103 · Updated last year
- Stanford CRFM's initiative to assess potential compliance with the draft EU AI Act ☆93 · Updated 2 years ago
- 🤗 Disaggregators: Curated data labelers for in-depth analysis. ☆67 · Updated 2 years ago
- The Foundation Model Transparency Index ☆85 · Updated last month
- This repo contains the code for generating the ToxiGen dataset, published at ACL 2022. ☆345 · Updated last year
- Repository for research in the field of Responsible NLP at Meta. ☆205 · Updated this week
- Fiddler Auditor is a tool to evaluate language models. ☆188 · Updated last year
- Dataset collection and preprocessing framework for NLP extreme multitask learning ☆192 · Updated 6 months ago
- The official code of LM-Debugger, an interactive tool for inspection and intervention in transformer-based language models. ☆182 · Updated 3 years ago
- 💬 Language Identification with Support for More Than 2000 Labels -- EMNLP 2023 ☆186 · Updated 2 months ago
- Annotated corpus + evaluation metrics for text anonymisation ☆70 · Updated 2 weeks ago
- Contains all assets to run with Moonshot Library (Connectors, Datasets and Metrics) ☆39 · Updated 2 weeks ago
- Code for the Multilingual Eval of Generative AI paper published at EMNLP 2023 ☆72 · Updated last year
- ☆228 · Updated 4 years ago
- 🦄 Unitxt is a Python library for enterprise-grade evaluation of AI performance, offering the world's largest catalog of tools and data … ☆212 · Updated 2 weeks ago
- [EMNLP 2023 Demo] fabricator - annotating and generating datasets with large language models. ☆111 · Updated last year
- Run safety benchmarks against AI models and view detailed reports showing how well they performed. ☆117 · Updated this week
- A python package for benchmarking interpretability techniques on Transformers. ☆215 · Updated last year
- Script for downloading GitHub. ☆98 · Updated last year
- ☆153 · Updated 3 years ago
- Codebase release for an EMNLP 2023 paper ☆19 · Updated 4 months ago
- Notebooks for training universal 0-shot classifiers on many different tasks ☆139 · Updated last year
- Credo AI Lens is a comprehensive assessment framework for AI systems. Lens standardizes model and data assessment, and acts as a central … ☆48 · Updated last year
- The world's largest social media toxicity dataset. ☆189 · Updated 3 years ago
- AI Data Management & Evaluation Platform ☆215 · Updated 2 years ago
- A framework for few-shot evaluation of autoregressive language models. ☆106 · Updated 2 years ago
- Command Line Interface for Hugging Face Inference Endpoints ☆65 · Updated last year
- RATransformers 🐭 - Make your transformer (like BERT, RoBERTa, GPT-2 and T5) Relation Aware! ☆42 · Updated 3 years ago