leafspark / AutoGGUF
Automatically quantize GGUF models
☆140 · Updated this week
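AutoGGUF automates the GGUF quantization step that is usually driven through llama.cpp's quantization tool. The sketch below is only illustrative of that kind of workflow, not AutoGGUF's actual implementation: the binary location, file paths, and the `Q4_K_M` preset are assumptions, not values taken from the repo.

```python
# Minimal sketch of the quantization step this kind of tool automates.
# Assumes a local llama.cpp build providing the `llama-quantize` binary
# and an existing FP16 GGUF file; all paths below are illustrative.
import subprocess
from pathlib import Path

LLAMA_QUANTIZE = Path("./llama.cpp/build/bin/llama-quantize")  # assumed location
SRC = Path("models/model-f16.gguf")     # unquantized input (assumed)
DST = Path("models/model-Q4_K_M.gguf")  # quantized output
PRESET = "Q4_K_M"                       # one of llama.cpp's quantization types

# Invoke the quantizer and fail loudly if it returns a non-zero exit code.
subprocess.run([str(LLAMA_QUANTIZE), str(SRC), str(DST), PRESET], check=True)
```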
Related projects
Alternatives and complementary repositories for AutoGGUF
- Run Ollama & GGUF models easily with a single command ☆48 · Updated 6 months ago
- ☆128 · Updated this week
- Idea: https://github.com/nyxkrage/ebook-groupchat/ ☆82 · Updated 3 months ago
- Gradio-based tool to run open-source LLMs directly from Huggingface ☆87 · Updated 4 months ago
- A pipeline-parallel training script for LLMs. ☆83 · Updated this week
- Easily view and modify JSON datasets for large language models ☆62 · Updated last month
- Dagger functions to import Hugging Face GGUF models into a local ollama instance and optionally push them to ollama.com. ☆110 · Updated 5 months ago
- A Python package for developing AI applications with local LLMs. ☆140 · Updated 4 months ago
- An OpenAI-compatible API for chat with image input and questions about the images, i.e. multimodal. ☆202 · Updated last month
- All the world is a play, we are but actors in it. ☆47 · Updated 4 months ago
- A fast batching API to serve LLMs ☆172 · Updated 6 months ago
- ☆104 · Updated 8 months ago
- A multimodal, function-calling-powered LLM web UI. ☆208 · Updated last month
- ☆112 · Updated this week
- Efficient visual programming for AI language models ☆299 · Updated 2 months ago
- ☆149 · Updated 4 months ago
- Low-rank adapter extraction for fine-tuned transformers models ☆162 · Updated 6 months ago
- Something similar to Apple Intelligence? ☆57 · Updated 4 months ago
- A stock market bot that automatically, once a day, rebalances your Robinhood portfolio by gathering information about each ticker in the … ☆34 · Updated 3 weeks ago
- Scripts to create your own MoE models using MLX ☆86 · Updated 8 months ago
- A Python application that routes incoming prompts to an LLM by category, and can support a single incoming connection from a front end to… ☆167 · Updated this week
- This is the Mixture-of-Agents (MoA) concept, adapted from the original work by TogetherAI. My version is tailored for local model usage a… ☆106 · Updated 4 months ago
- A Python package for serving LLMs on OpenAI-compatible API endpoints with prompt caching, using MLX. ☆55 · Updated last week
- After my server UI improvements were successfully merged, consider this repo a playground for experimenting, tinkering and hacking around… ☆56 · Updated 3 months ago
- Large Model Proxy is designed to make it easy to run multiple resource-heavy Large Models (LM) on the same machine with a limited amount of… ☆47 · Updated last month
- HTTP proxy for on-demand model loading with llama.cpp (or other OpenAI-compatible backends) ☆41 · Updated this week
- ☆94 · Updated 2 months ago
- AnyModal is a flexible multimodal language model framework ☆40 · Updated this week
- Experimental LLM inference UX to aid in creative writing ☆106 · Updated 4 months ago
- CLI tool to quantize GGUF, GPTQ, AWQ, HQQ, and EXL2 models ☆64 · Updated last month