bodaay / HuggingFaceModelDownloader
Simple Go utility to download HuggingFace models and datasets
☆657 · Updated 6 months ago
Alternatives and similar repositories for HuggingFaceModelDownloader:
Users interested in HuggingFaceModelDownloader are comparing it to the repositories listed below.
- An OpenAI-compatible exllamav2 API that's both lightweight and fast ☆920 · Updated this week
- The llama-cpp-agent framework is a tool designed for easy interaction with Large Language Models (LLMs), allowing users to chat with LLM … ☆552 · Updated 2 months ago
- Web UI for ExLlamaV2 ☆493 · Updated 2 months ago
- Comparison of the output quality of quantization methods, using Llama 3, transformers, GGUF, EXL2. ☆152 · Updated 11 months ago
- An OpenAI-compatible API for chat with image input and questions about the images, aka multimodal. ☆251 · Updated last month
- Large-scale LLM inference engine ☆1,395 · Updated this week
- Pure C++ implementation of several models for real-time chatting on your computer (CPU & GPU) ☆578 · Updated this week
- An AI assistant beyond the chat box. ☆326 · Updated last year
- Convenience scripts to finetune (chat-)LLaMa3 and other models for any language ☆305 · Updated 10 months ago
- This repo contains the source code for RULER: What's the Real Context Size of Your Long-Context Language Models? ☆1,066 · Updated 2 months ago
- LLM frontend in a single HTML file ☆450 · Updated 3 months ago
- This project demonstrates a basic chain-of-thought interaction with any LLM (Large Language Model) ☆318 · Updated 7 months ago
- A proxy server for multiple ollama instances with key security ☆409 · Updated 2 weeks ago
- Model swapping for llama.cpp (or any local OpenAI-compatible server) ☆574 · Updated this week
- ☆855 · Updated 7 months ago
- A multimodal, function-calling-powered LLM web UI. ☆214 · Updated 7 months ago
- Self-evaluating interview for AI coders ☆579 · Updated this week
- Efficient visual programming for AI language models ☆356 · Updated 7 months ago
- ☆456 · Updated 2 weeks ago
- ☆622 · Updated last week
- An optimized quantization and inference library for running LLMs locally on modern consumer-class GPUs ☆305 · Updated this week
- ☆713 · Updated last month
- Automatically quantize GGUF models ☆168 · Updated this week
- A self-hosted web UI for 30+ generative AI models ☆575 · Updated this week
- Python bindings for the Transformer models implemented in C/C++ using the GGML library ☆1,859 · Updated last year
- Your Trusty Memory-enabled AI Companion - Simple RAG chatbot optimized for local LLMs | 12 Languages Supported | OpenAI API Compatible ☆311 · Updated last month
- A fast batching API to serve LLM models ☆182 · Updated last year
- INT4/INT5/INT8 and FP16 inference on CPU for the RWKV language model ☆1,509 · Updated last month
- An application for running LLMs locally on your device, with your documents, facilitating detailed citations in generated responses. ☆580 · Updated 5 months ago
- A llama.cpp drop-in replacement for OpenAI's GPT endpoints, allowing GPT-powered apps to run off local llama.cpp models instead of OpenAI… ☆598 · Updated last year