bodaay / HuggingFaceModelDownloader
Simple Go utility to download HuggingFace Models and Datasets
☆734 · Updated last week
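The core task such a downloader performs can be sketched in a few lines of Go. This is an illustrative sketch, not the tool's actual implementation; it assumes HuggingFace's public `resolve` URL scheme (`https://huggingface.co/<repo>/resolve/main/<file>`), and the helper names `hfFileURL` and `downloadFile` are invented for the example.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

// hfFileURL builds the public "resolve" download URL for a file in a
// HuggingFace repository (assumed URL scheme, not this tool's code).
func hfFileURL(repo, file string) string {
	return fmt.Sprintf("https://huggingface.co/%s/resolve/main/%s", repo, file)
}

// downloadFile streams the file at url into the local path dest.
func downloadFile(url, dest string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unexpected status: %s", resp.Status)
	}
	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, resp.Body)
	return err
}

func main() {
	// Repo and file names here are placeholders for illustration only.
	url := hfFileURL("some-org/some-model", "config.json")
	fmt.Println("would download:", url)
}
```

Real downloaders add concurrency, resume support, and checksum verification on top of this basic streaming copy.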
Alternatives and similar repositories for HuggingFaceModelDownloader
Users interested in HuggingFaceModelDownloader are comparing it to the repositories listed below.
- Web UI for ExLlamaV2 · ☆513 · Updated 7 months ago
- The official API server for Exllama. OAI-compatible, lightweight, and fast. · ☆1,047 · Updated 2 weeks ago
- Large-scale LLM inference engine · ☆1,543 · Updated this week
- An OpenAI-API-compatible server for chat with image input and questions about the images, aka multimodal. · ☆259 · Updated 6 months ago
- Self-evaluating interview for AI coders · ☆597 · Updated 2 months ago
- An extension for oobabooga/text-generation-webui that enables the LLM to search the web · ☆267 · Updated this week
- Automatically quantize GGUF models · ☆199 · Updated last week
- Comparison of the output quality of quantization methods, using Llama 3, transformers, GGUF, EXL2 · ☆164 · Updated last year
- An optimized quantization and inference library for running LLMs locally on modern consumer-class GPUs · ☆491 · Updated last week
- Dolphin System Messages · ☆346 · Updated 6 months ago
- A multimodal, function-calling-powered LLM web UI · ☆216 · Updated 11 months ago
- ☆657 · Updated 3 weeks ago
- LLM-powered lossless compression tool · ☆288 · Updated last year
- An AI assistant beyond the chat box · ☆328 · Updated last year
- The llama-cpp-agent framework is a tool designed for easy interaction with Large Language Models (LLMs), allowing users to chat with LLM … · ☆589 · Updated 6 months ago
- A fast inference library for running LLMs locally on modern consumer-class GPUs · ☆4,309 · Updated 3 weeks ago
- The RunPod worker template for serving our large language model endpoints. Powered by vLLM. · ☆365 · Updated last week
- Convenience scripts to finetune (chat-)LLaMa3 and other models for any language · ☆315 · Updated last year
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights · ☆2,898 · Updated last year
- Use Code Llama with Visual Studio Code and the Continue extension. A local LLM alternative to GitHub Copilot. · ☆567 · Updated last year
- Simple Python library/structure to ablate features in LLMs that are supported by TransformerLens · ☆506 · Updated last year
- A pipeline-parallel training script for LLMs · ☆158 · Updated 4 months ago
- Pure C++ implementation of several models for real-time chatting on your computer (CPU & GPU) · ☆696 · Updated this week
- Make abliterated models with transformers, easily and quickly · ☆86 · Updated 4 months ago
- A fast batching API to serve LLM models · ☆187 · Updated last year
- LLM frontend in a single HTML file · ☆643 · Updated 7 months ago
- Falcon LLM ggml framework with CPU and GPU support · ☆247 · Updated last year
- A simple converter that converts PyTorch bin files to safetensors, intended for LLM conversion · ☆71 · Updated last year
- Download models from the Ollama library, without Ollama · ☆97 · Updated 10 months ago
- A proxy server for multiple Ollama instances with key security · ☆489 · Updated last week