huggingface / huggingface_hub
The official Python client for the Hugging Face Hub.
☆ 2,882 · Updated this week
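For context, a minimal sketch of the client in use; the repo ID and filter values below are illustrative assumptions, not part of this listing:

```python
from huggingface_hub import HfApi, hf_hub_download

# Download a single file from a repo on the Hub (cached locally on reuse)
config_path = hf_hub_download(repo_id="gpt2", filename="config.json")
print(config_path)

# Query the Hub programmatically, e.g. list a few popular text-classification models
api = HfApi()
for model in api.list_models(filter="text-classification", sort="downloads", limit=5):
    print(model.id)
```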
Alternatives and similar repositories for huggingface_hub
Users interested in huggingface_hub are comparing it to the libraries listed below.
- 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy to use hardware optimization… ☆ 3,066 · Updated this week
- 🤗 Evaluate: A library for easily evaluating machine learning models and datasets. ☆ 2,311 · Updated 3 weeks ago
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ☆ 9,086 · Updated last week
- Simple, safe way to store and distribute tensors ☆ 3,425 · Updated 2 weeks ago
- 🤗 A list of wonderful open-source projects & applications integrated with Hugging Face libraries. ☆ 992 · Updated last year
- Notebooks using the Hugging Face libraries 🤗 ☆ 4,291 · Updated last week
- ☆ 2,872 · Updated this week
- Public repo for HF blog posts ☆ 3,100 · Updated this week
- Accessible large language models via k-bit quantization for PyTorch. ☆ 7,533 · Updated this week
- Unofficial Hugging Face client with a dash of personality ☆ 12 · Updated 10 months ago
- Large Language Model Text Generation Inference ☆ 10,477 · Updated this week
- 🤗 The largest hub of ready-to-use datasets for AI models with fast, easy-to-use and efficient data manipulation tools ☆ 20,583 · Updated last week
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning (see the sketch after this list). ☆ 19,447 · Updated last week
- Build and share delightful machine learning apps, all in Python. 🌟 Star to support our work! ☆ 39,719 · Updated this week
- An easy-to-use LLMs quantization package with user-friendly APIs, based on GPTQ algorithm. ☆ 4,938 · Updated 4 months ago
- A Unified Library for Parameter-Efficient and Modular Transfer Learning ☆ 2,760 · Updated 3 weeks ago
- Use Hugging Face with JavaScript ☆ 2,212 · Updated last week
- QLoRA: Efficient Finetuning of Quantized LLMs ☆ 10,648 · Updated last year
- Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" ☆ 12,627 · Updated 8 months ago
- 🤗 AutoTrain Advanced ☆ 4,480 · Updated 7 months ago
- 💥 Fast State-of-the-Art Tokenizers optimized for Research and Production ☆ 10,033 · Updated this week
- Hackable and optimized Transformers building blocks, supporting a composable construction. ☆ 9,900 · Updated last week
- Ongoing research training transformer models at scale ☆ 13,458 · Updated this week
- PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation ☆ 5,462 · Updated last year
- Python bindings for the Transformer models implemented in C/C++ using GGML library. ☆ 1,875 · Updated last year
- Efficient few-shot learning with Sentence Transformers ☆ 2,559 · Updated last month
- Pretrained model hub for Keras 3. ☆ 925 · Updated last week
- Beyond the Imitation Game collaborative benchmark for measuring and extrapolating the capabilities of language models ☆ 3,111 · Updated last year
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. Documentation: ☆ 2,235 · Updated 3 months ago
- ModelScope: bring the notion of Model-as-a-Service to life. ☆ 8,293 · Updated last week
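As referenced in the PEFT entry above, here is a minimal sketch of attaching a LoRA adapter to a base model with 🤗 PEFT; the model name and hyperparameters are illustrative assumptions:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load a small base model and attach a LoRA adapter to its attention projection layers
base = AutoModelForCausalLM.from_pretrained("gpt2")
lora = LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"], task_type="CAUSAL_LM")
model = get_peft_model(base, lora)

# Only the low-rank adapter weights are trainable; the base model stays frozen
model.print_trainable_parameters()
```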