huggingface/huggingface_hub
The official Python client for the Hugging Face Hub.
☆2,101 · Updated this week
Related projects
Alternatives and complementary repositories for huggingface_hub
- huggingface/optimum: 🚀 Accelerate training and inference of 🤗 Transformers and 🤗 Diffusers with easy to use hardware optimization tools ☆2,559 · Updated last week
- huggingface/accelerate: 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ☆7,926 · Updated this week
- huggingface/notebooks: Notebooks using the Hugging Face libraries 🤗 ☆3,652 · Updated last week
- bitsandbytes-foundation/bitsandbytes: Accessible large language models via k-bit quantization for PyTorch. ☆6,269 · Updated last week
- huggingface/evaluate: 🤗 Evaluate: A library for easily evaluating machine learning models and datasets. ☆2,029 · Updated last month
- huggingface/awesome-huggingface: 🤗 A list of wonderful open-source projects & applications integrated with Hugging Face libraries. ☆893 · Updated 6 months ago
- huggingface/safetensors: Simple, safe way to store and distribute tensors ☆2,874 · Updated this week
- EleutherAI/lm-evaluation-harness: A framework for few-shot evaluation of language models. ☆6,918 · Updated this week
- huggingface/text-generation-inference: Large Language Model Text Generation Inference ☆9,026 · Updated this week
- huggingface/peft: 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆16,366 · Updated this week
- huggingface/blog: Public repo for HF blog posts ☆2,363 · Updated this week
- huggingface/trl: Train transformer language models with reinforcement learning. ☆10,001 · Updated this week
- huggingface/autotrain-advanced: 🤗 AutoTrain Advanced ☆3,993 · Updated this week
- Dao-AILab/flash-attention: Fast and memory-efficient exact attention ☆14,135 · Updated this week
- huggingface/alignment-handbook: Robust recipes to align language models with human and AI preferences ☆4,666 · Updated last month
- NVIDIA/Megatron-LM: Ongoing research training transformer models at scale ☆10,515 · Updated this week
- huggingface/datasets: 🤗 The largest hub of ready-to-use datasets for ML models with fast, easy-to-use and efficient data manipulation tools ☆19,240 · Updated last week
- facebookresearch/xformers: Hackable and optimized Transformers building blocks, supporting a composable construction. ☆8,615 · Updated last week
- facebookresearch/fairscale: PyTorch extensions for high performance and large scale training. ☆3,188 · Updated 2 months ago
- NVIDIA/FasterTransformer: Transformer related optimization, including BERT, GPT ☆5,873 · Updated 7 months ago
- microsoft/DeepSpeed-MII: MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆1,891 · Updated this week
- artidoro/qlora: QLoRA: Efficient Finetuning of Quantized LLMs ☆10,036 · Updated 5 months ago
- AutoGPTQ/AutoGPTQ: An easy-to-use LLMs quantization package with user-friendly APIs, based on the GPTQ algorithm. ☆4,462 · Updated last month
- skypilot-org/skypilot: SkyPilot: Run AI and batch jobs on any infra (Kubernetes or 12+ clouds). Get unified execution, cost savings, and high GPU availability v… ☆6,757 · Updated this week
- NVIDIA/TransformerEngine: A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs… ☆1,955 · Updated this week
- axolotl-ai-cloud/axolotl: Go ahead and axolotl questions ☆7,875 · Updated this week
- google/BIG-bench: Beyond the Imitation Game collaborative benchmark for measuring and extrapolating the capabilities of language models ☆2,863 · Updated 3 months ago
- huggingface/text-embeddings-inference: A blazing fast inference solution for text embeddings models ☆2,816 · Updated last week