leafspark / AutoGGUF
Automatically quantize GGUF models
☆184 · Updated last week
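AutoGGUF automates GGUF quantization with llama.cpp's tooling. As a rough sketch of the manual step it wraps (the file names here are assumptions for illustration, and this presumes a local llama.cpp build with the `llama-quantize` binary on the path):

```shell
# Hypothetical manual equivalent of what AutoGGUF automates:
# quantize an FP16 GGUF down to a 4-bit variant with llama.cpp.
# "model-f16.gguf" and the output name are placeholder paths.
./llama-quantize model-f16.gguf model-q4_k_m.gguf Q4_K_M
```

AutoGGUF's value is running this across models and quantization types without hand-invoking the CLI for each one.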
Alternatives and similar repositories for AutoGGUF
Users interested in AutoGGUF are comparing it to the repositories listed below.
- A pipeline parallel training script for LLMs. ☆149 · Updated last month
- An OpenAI API compatible API for chat with image input and questions about the images, aka Multimodal. ☆255 · Updated 3 months ago
- Run ollama & GGUF easily with a single command ☆51 · Updated last year
- A multimodal, function-calling powered LLM webui. ☆214 · Updated 9 months ago
- This is the Mixture-of-Agents (MoA) concept, adapted from the original work by TogetherAI. My version is tailored for local model usage a… ☆116 · Updated 11 months ago
- ☆77 · Updated this week
- Easily view and modify JSON datasets for large language models ☆76 · Updated last month
- ☆94 · Updated 6 months ago
- An unsupervised model merging algorithm for Transformers-based language models. ☆105 · Updated last year
- ☆114 · Updated 7 months ago
- ☆130 · Updated last month
- Gradio-based tool to run open-source LLM models directly from Hugging Face ☆93 · Updated 11 months ago
- ☆114 · Updated 6 months ago
- Dagger functions to import Hugging Face GGUF models into a local ollama instance and optionally push them to ollama.com. ☆115 · Updated last year
- ☆203 · Updated last month
- After my server UI improvements were successfully merged, consider this repo a playground for experimenting, tinkering and hacking around… ☆54 · Updated 10 months ago
- Serving LLMs in the HF-Transformers format via a PyFlask API ☆71 · Updated 9 months ago
- A fast batching API to serve LLM models ☆183 · Updated last year
- This extension enhances the capabilities of textgen-webui by integrating advanced vision models, allowing users to have contextualized co… ☆54 · Updated 8 months ago
- Experimental LLM inference UX to aid in creative writing ☆114 · Updated 6 months ago
- Open-source LLM UI, compatible with all local LLM providers. ☆174 · Updated 9 months ago
- Run multiple resource-heavy Large Models (LM) on the same machine with a limited amount of VRAM/other resources by exposing them on differe… ☆67 · Updated this week
- All the world is a play, we are but actors in it. ☆50 · Updated this week
- 1.58-bit LLaMa model ☆81 · Updated last year
- ☆49 · Updated 4 months ago
- ☆129 · Updated last month
- Low-rank adapter extraction for fine-tuned transformers models ☆173 · Updated last year
- Efficient visual programming for AI language models ☆363 · Updated last month
- LLM inference in C/C++ ☆77 · Updated last month
- A Python package for developing AI applications with local LLMs. ☆150 · Updated 5 months ago