huggingface/evaluate
🤗 Evaluate: A library for easily evaluating machine learning models and datasets.
⭐2,082 · Updated last week
Alternatives and similar repositories for evaluate:
Users interested in evaluate are comparing it to the libraries listed below.
- 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy to use hardware optimization… ⭐2,667 · Updated this week
- A Unified Library for Parameter-Efficient and Modular Transfer Learning ⭐2,631 · Updated last week
- The implementation of DeBERTa ⭐2,026 · Updated last year
- A modular RL library to fine-tune language models to human preferences ⭐2,253 · Updated 10 months ago
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) ⭐4,567 · Updated last year
- Efficient few-shot learning with Sentence Transformers ⭐2,311 · Updated this week
- PyTorch extensions for high performance and large scale training. ⭐3,232 · Updated this week
- Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models ⭐1,671 · Updated 2 months ago
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ⭐8,178 · Updated this week
- Accessible large language models via k-bit quantization for PyTorch. ⭐6,522 · Updated this week
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ⭐1,354 · Updated 9 months ago
- Toolkit for creating, sharing and using natural language prompts. ⭐2,740 · Updated last year
- Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model in just 2 lines of code. ⭐1,316 · Updated last year
- Beyond the Imitation Game collaborative benchmark for measuring and extrapolating the capabilities of language models ⭐2,928 · Updated 5 months ago
- General technology for enabling AI capabilities w/ LLMs and MLLMs ⭐3,797 · Updated last week
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" ⭐1,667 · Updated last year
- Central place for the engineering/scaling WG: documentation, SLURM scripts and logs, compute environment and data. ⭐982 · Updated 5 months ago
- Holistic Evaluation of Language Models (HELM), a framework to increase the transparency of language models (https://arxiv.org/abs/2211.09… ⭐2,015 · Updated this week
- The hub for EleutherAI's work on interpretability and learning dynamics ⭐2,339 · Updated last month
- Foundation Architecture for (M)LLMs ⭐3,038 · Updated 9 months ago
- Transformer related optimization, including BERT, GPT ⭐5,981 · Updated 9 months ago
- Cramming the training of a (BERT-type) language model into limited compute. ⭐1,307 · Updated 7 months ago
- Explain, analyze, and visualize NLP language models. Ecco creates interactive visualizations directly in Jupyter notebooks explaining the… ⭐1,996 · Updated 5 months ago
- Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks. ⭐2,147 · Updated last week
- MTEB: Massive Text Embedding Benchmark ⭐2,086 · Updated this week