bodaay / HuggingFaceModelDownloader
Simple Go utility to download HuggingFace models and datasets
☆749 · Updated last month
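For orientation, here is a minimal Go sketch of what a single Hub file download reduces to: fetching the file through the hub's `resolve/<revision>` URL and streaming it to disk. This is an illustration rather than the tool's actual code; the `downloadFile` helper and the `gpt2`/`config.json` example are placeholders.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadFile streams one file of a Hub repo to a local path using the
// public resolve endpoint, e.g. https://huggingface.co/gpt2/resolve/main/config.json
func downloadFile(repo, revision, filename, dest string) error {
	url := fmt.Sprintf("https://huggingface.co/%s/resolve/%s/%s", repo, revision, filename)

	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unexpected status %s for %s", resp.Status, url)
	}

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	// Stream straight to disk so large model files never sit fully in memory.
	_, err = io.Copy(out, resp.Body)
	return err
}

func main() {
	// Placeholder repo and file, purely for illustration.
	if err := downloadFile("gpt2", "main", "config.json", "config.json"); err != nil {
		fmt.Fprintln(os.Stderr, "download failed:", err)
		os.Exit(1)
	}
	fmt.Println("done")
}
```

The utility itself layers its own conveniences on top of this basic step; refer to its README for actual usage and flags.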
Alternatives and similar repositories for HuggingFaceModelDownloader
Users interested in HuggingFaceModelDownloader are comparing it to the libraries listed below:
- Web UI for ExLlamaV2 ☆511 · Updated 8 months ago
- Large-scale LLM inference engine ☆1,577 · Updated 2 weeks ago
- An OpenAI-compatible API for chat with image input and questions about the images, aka multimodal. ☆262 · Updated 7 months ago
- The official API server for Exllama. OAI-compatible, lightweight, and fast. ☆1,068 · Updated last week
- Comparison of the output quality of quantization methods, using Llama 3, transformers, GGUF, EXL2. ☆165 · Updated last year
- An extension for oobabooga/text-generation-webui that enables the LLM to search the web ☆268 · Updated this week
- The llama-cpp-agent framework is a tool designed for easy interaction with Large Language Models (LLMs), allowing users to chat with LLM… ☆601 · Updated 8 months ago
- Docker variants of oobabooga's text-generation-webui, including pre-built images. ☆440 · Updated 3 months ago
- Dolphin System Messages ☆353 · Updated 8 months ago
- An optimized quantization and inference library for running LLMs locally on modern consumer-class GPUs ☆541 · Updated 2 weeks ago
- ☆661 · Updated last month
- Automatically quant GGUF models ☆214 · Updated last week
- A multimodal, function calling powered LLM webui. ☆216 · Updated last year
- A fast inference library for running LLMs locally on modern consumer-class GPUs ☆4,353 · Updated 2 months ago
- Make abliterated models with transformers, easy and fast ☆90 · Updated 6 months ago
- An AI assistant beyond the chat box. ☆327 · Updated last year
- Convenience scripts to finetune (chat-)LLaMa3 and other models for any language ☆316 · Updated last year
- Wheels for llama-cpp-python compiled with cuBLAS support ☆97 · Updated last year
- Pure C++ implementation of several models for real-time chatting on your computer (CPU & GPU) ☆727 · Updated this week
- Self-evaluating interview for AI coders ☆596 · Updated 4 months ago
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ☆2,903 · Updated 2 years ago
- LLM-powered lossless compression tool ☆288 · Updated last year
- LLM frontend in a single HTML file ☆654 · Updated 9 months ago
- The RunPod worker template for serving our large language model endpoints. Powered by vLLM. ☆373 · Updated last week
- Croco.Cpp is a fork of KoboldCPP inferencing GGML/GGUF models on CPU/CUDA with KoboldAI's UI. It's powered partly by IK_LLama.cpp, and compati… ☆152 · Updated this week
- A fast batching API to serve LLMs ☆188 · Updated last year
- Customizable implementation of the self-instruct paper. ☆1,050 · Updated last year
- Efficient visual programming for AI language models ☆361 · Updated 5 months ago
- Python bindings for the Transformer models implemented in C/C++ using the GGML library. ☆1,876 · Updated last year
- A pipeline parallel training script for LLMs. ☆159 · Updated 6 months ago