huggingface / evaluate
🤗 Evaluate: A library for easily evaluating machine learning models and datasets.
⭐ 1,965 · Updated this week
Related projects:
- 🚀 Accelerate training and inference of 🤗 Transformers and 🤗 Diffusers with easy-to-use hardware optimization tools · ⭐ 2,459 · Updated this week
- Efficient few-shot learning with Sentence Transformers · ⭐ 2,138 · Updated last week
- A Unified Library for Parameter-Efficient and Modular Transfer Learning · ⭐ 2,525 · Updated 3 weeks ago
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) · ⭐ 4,442 · Updated 8 months ago
- Beyond the Imitation Game collaborative benchmark for measuring and extrapolating the capabilities of language models · ⭐ 2,815 · Updated 2 months ago
- Cramming the training of a (BERT-type) language model into limited compute. · ⭐ 1,284 · Updated 3 months ago
- PyTorch extensions for high performance and large scale training. · ⭐ 3,149 · Updated 2 weeks ago
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… · ⭐ 7,687 · Updated this week
- Accessible large language models via k-bit quantization for PyTorch. · ⭐ 6,029 · Updated this week
- A modular RL library to fine-tune language models to human preferences · ⭐ 2,173 · Updated 6 months ago
- Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀 · ⭐ 1,643 · Updated 10 months ago
- General technology for enabling AI capabilities w/ LLMs and MLLMs · ⭐ 3,561 · Updated this week
- Central place for the engineering/scaling WG: documentation, SLURM scripts and logs, compute environment and data. · ⭐ 972 · Updated last month
- The hub for EleutherAI's work on interpretability and learning dynamics · ⭐ 2,210 · Updated 3 weeks ago
- Toolkit for creating, sharing and using natural language prompts. · ⭐ 2,644 · Updated 10 months ago
- The implementation of DeBERTa · ⭐ 1,963 · Updated 11 months ago
- A Heterogeneous Benchmark for Information Retrieval. Easy to use, evaluate your models across 15+ diverse IR datasets. · ⭐ 1,552 · Updated last month
- Holistic Evaluation of Language Models (HELM), a framework to increase the transparency of language models (https://arxiv.org/abs/2211.09… · ⭐ 1,857 · Updated this week
- Foundation Architecture for (M)LLMs · ⭐ 3,003 · Updated 5 months ago
- Explain, analyze, and visualize NLP language models. Ecco creates interactive visualizations directly in Jupyter notebooks explaining the… · ⭐ 1,965 · Updated last month
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" · ⭐ 1,561 · Updated last year
- A framework for few-shot evaluation of language models. · ⭐ 6,426 · Updated this week
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackab… · ⭐ 1,519 · Updated 7 months ago
- maximal update parametrization (µP) · ⭐ 1,334 · Updated 2 months ago
- Pyserini is a Python toolkit for reproducible information retrieval research with sparse and dense representations. · ⭐ 1,633 · Updated this week
- SGPT: GPT Sentence Embeddings for Semantic Search · ⭐ 838 · Updated 7 months ago