huggingface / evaluate
🤗 Evaluate: A library for easily evaluating machine learning models and datasets.
⭐ 2,182 · Updated 3 months ago
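A minimal sketch of the typical evaluate workflow, loading a metric by name and computing it over predictions and references (the metric choice and values below are illustrative):

```python
# Minimal sketch: load a metric by name, then compute it on predictions vs. references.
import evaluate

accuracy = evaluate.load("accuracy")
result = accuracy.compute(predictions=[0, 1, 1, 0], references=[0, 1, 0, 0])
print(result)  # {'accuracy': 0.75}
```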
Alternatives and similar repositories for evaluate:
Users interested in evaluate are comparing it to the libraries listed below:
- 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy to use hardware optimization… ⭐ 2,845 · Updated this week
- A Unified Library for Parameter-Efficient and Modular Transfer Learning ⭐ 2,683 · Updated this week
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ⭐ 8,626 · Updated this week (see the usage sketch after this list)
- ⭐ 1,511 · Updated this week
- ⭐ 1,205 · Updated 8 months ago
- Beyond the Imitation Game collaborative benchmark for measuring and extrapolating the capabilities of language models ⭐ 3,023 · Updated 9 months ago
- Efficient few-shot learning with Sentence Transformers ⭐ 2,447 · Updated last week
- Accessible large language models via k-bit quantization for PyTorch. ⭐ 6,932 · Updated this week
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) ⭐ 4,621 · Updated last year
- Holistic Evaluation of Language Models (HELM) is an open source Python framework created by the Center for Research on Foundation Models… ⭐ 2,167 · Updated this week
- General technology for enabling AI capabilities w/ LLMs and MLLMs ⭐ 3,930 · Updated this week
- Central place for the engineering/scaling WG: documentation, SLURM scripts and logs, compute environment and data. ⭐ 992 · Updated 8 months ago
- PyTorch extensions for high performance and large scale training. ⭐ 3,298 · Updated last week
- Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model in just 2 lines of code. ⭐ 1,335 · Updated last year
- Toolkit for creating, sharing and using natural language prompts. ⭐ 2,820 · Updated last year
- A modular RL library to fine-tune language models to human preferences ⭐ 2,298 · Updated last year
- ⭐ 2,787 · Updated this week
- A Heterogeneous Benchmark for Information Retrieval. Easy to use, evaluate your models across 15+ diverse IR datasets. ⭐ 1,776 · Updated last month
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ⭐ 2,002 · Updated 3 weeks ago
- Robust recipes to align language models with human and AI preferences ⭐ 5,130 · Updated 4 months ago
- MTEB: Massive Text Embedding Benchmark ⭐ 2,409 · Updated this week
- Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀 ⭐ 1,684 · Updated 5 months ago
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ⭐ 1,414 · Updated this week
- Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks. ⭐ 2,354 · Updated this week
- Explain, analyze, and visualize NLP language models. Ecco creates interactive visualizations directly in Jupyter notebooks explaining the… ⭐ 2,026 · Updated 8 months ago
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackab… ⭐ 1,565 · Updated last year
- The implementation of DeBERTa ⭐ 2,072 · Updated last year
- Organize your experiments into discrete steps that can be cached and reused throughout the lifetime of your research project. ⭐ 552 · Updated 10 months ago
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" ⭐ 1,719 · Updated last year
- Original Implementation of Prompt Tuning from Lester et al., 2021 ⭐ 676 · Updated last month
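For the Accelerate entry referenced above, a minimal sketch of its prepare/backward pattern; the toy model, optimizer, and data here are illustrative and not part of the library:

```python
# Minimal sketch of the Accelerate training pattern: wrap model, optimizer,
# and dataloader with accelerator.prepare(), and use accelerator.backward()
# in place of loss.backward(). The toy model and data are illustrative.
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()

model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
dataset = TensorDataset(torch.randn(32, 4), torch.randint(0, 2, (32,)))
dataloader = DataLoader(dataset, batch_size=8)

# prepare() handles device placement and distributed/mixed-precision wrapping
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for inputs, targets in dataloader:
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(inputs), targets)
    accelerator.backward(loss)  # replaces loss.backward()
    optimizer.step()
```

The same script can then be started with the `accelerate launch` CLI to run it across multiple devices without further code changes.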