huggingface / evaluate
🤗 Evaluate: A library for easily evaluating machine learning models and datasets.
⭐ 2,410 · Updated 2 weeks ago
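For context, a minimal sketch of the library's core load/compute pattern (the metric name and example labels below are illustrative, not taken from this page):

```python
# Minimal sketch of the evaluate workflow: load a metric from the Hub,
# then score predictions against references (pip install evaluate).
import evaluate

# Load a metric implementation by name; "accuracy" is bundled with the library.
accuracy = evaluate.load("accuracy")

# compute() takes parallel lists of predictions and ground-truth labels
# and returns a dict of scores, e.g. {"accuracy": 0.75}.
results = accuracy.compute(
    predictions=[0, 1, 1, 0],
    references=[0, 1, 0, 0],
)
print(results)
```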
Alternatives and similar repositories for evaluate
Users interested in evaluate are comparing it to the libraries listed below.
- 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy to use hardware optimization… · ⭐ 3,277 · Updated 3 weeks ago
- Toolkit for creating, sharing and using natural language prompts. · ⭐ 2,997 · Updated 2 years ago
- ⭐ 1,560 · Updated 2 weeks ago
- Efficient few-shot learning with Sentence Transformers · ⭐ 2,676 · Updated last month
- A Unified Library for Parameter-Efficient and Modular Transfer Learning · ⭐ 2,802 · Updated 3 months ago
- Beyond the Imitation Game collaborative benchmark for measuring and extrapolating the capabilities of language models · ⭐ 3,195 · Updated last year
- ⭐ 1,252 · Updated last year
- Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model in just 2 lines of code. · ⭐ 1,407 · Updated 2 years ago
- PyTorch extensions for high performance and large scale training. · ⭐ 3,397 · Updated 9 months ago
- Holistic Evaluation of Language Models (HELM) is an open source Python framework created by the Center for Research on Foundation Models… · ⭐ 2,662 · Updated this week
- A modular RL library to fine-tune language models to human preferences · ⭐ 2,376 · Updated last year
- General technology for enabling AI capabilities w/ LLMs and MLLMs · ⭐ 4,277 · Updated last month
- ⭐ 2,945 · Updated 3 weeks ago
- The hub for EleutherAI's work on interpretability and learning dynamics · ⭐ 2,725 · Updated 2 months ago
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) · ⭐ 4,741 · Updated 2 years ago
- Accessible large language models via k-bit quantization for PyTorch. · ⭐ 7,931 · Updated 2 weeks ago
- MTEB: Massive Text Embedding Benchmark · ⭐ 3,106 · Updated this week
- Pyserini is a Python toolkit for reproducible information retrieval research with sparse and dense representations. · ⭐ 2,013 · Updated this week
- The implementation of DeBERTa · ⭐ 2,189 · Updated 2 years ago
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… · ⭐ 9,477 · Updated last week
- A Heterogeneous Benchmark for Information Retrieval. Easy to use, evaluate your models across 15+ diverse IR datasets. · ⭐ 2,064 · Updated 3 months ago
- Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀 · ⭐ 1,688 · Updated last year
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. · ⭐ 2,092 · Updated 7 months ago
- Central place for the engineering/scaling WG: documentation, SLURM scripts and logs, compute environment and data. · ⭐ 1,008 · Updated last year
- Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks. · ⭐ 2,877 · Updated this week
- ⭐ 1,634 · Updated 2 years ago
- Measuring Massive Multitask Language Understanding | ICLR 2021 · ⭐ 1,546 · Updated 2 years ago
- [ACL 2023] One Embedder, Any Task: Instruction-Finetuned Text Embeddings · ⭐ 2,021 · Updated last year
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" · ⭐ 1,811 · Updated 7 months ago
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends · ⭐ 2,291 · Updated 2 weeks ago