multiplexerai / Namespace-RAG
☆13 · Updated last year
Alternatives and similar repositories for Namespace-RAG
Users interested in Namespace-RAG are comparing it to the libraries listed below.
- ☆25 · Updated last year
- ☆40 · Updated last year
- Gradio-based tool to run open-source LLM models directly from Hugging Face ☆96 · Updated last year
- Serving LLMs in the HF-Transformers format via a PyFlask API ☆71 · Updated last year
- An OpenAI-API-compatible LLM inference server based on ExLlamaV2 ☆25 · Updated last year
- Experimental LLM inference UX to aid in creative writing ☆127 · Updated 11 months ago
- A simple experiment letting two local LLMs have a conversation about anything! ☆112 · Updated last year
- After my server UI improvements were successfully merged, consider this repo a playground for experimenting, tinkering, and hacking around… ☆54 · Updated last year
- Local LLM inference & management server with a built-in OpenAI API ☆31 · Updated last year
- ☆50 · Updated last year
- LLaVA server (llama.cpp) ☆183 · Updated 2 years ago
- An API for VoiceCraft ☆25 · Updated last year
- Deploy your GGML models to Hugging Face Spaces with Docker and Gradio ☆38 · Updated 2 years ago
- All the world is a play, we are but actors in it. ☆50 · Updated 4 months ago
- Python package wrapping llama.cpp for on-device LLM inference ☆94 · Updated last month
- Embed anything. ☆27 · Updated last year
- A stable, fast, and easy-to-use inference library with a focus on a sync-to-async API ☆45 · Updated last year
- LIVA - Local Intelligent Voice Assistant ☆61 · Updated last year
- Something similar to Apple Intelligence? ☆61 · Updated last year
- Accepts a Hugging Face model URL, then automatically downloads and quantizes the model using Bits and Bytes ☆38 · Updated last year
- Let's create synthetic textbooks together :) ☆75 · Updated last year
- Run Ollama & GGUF models easily with a single command ☆52 · Updated last year
- ☆16 · Updated 2 years ago
- Scripts to create your own MoE models using MLX ☆90 · Updated last year
- "A towel is about the most massively useful thing an interstellar AI hitchhiker can have" ☆48 · Updated last year
- Experimental sampler to make LLMs more creative ☆31 · Updated 2 years ago
- A fast batching API to serve LLM models ☆189 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers (QLoRA) ☆124 · Updated 2 years ago
- ☆17 · Updated 11 months ago
- Minimal, clean-code implementation of RAG with MLX using GGUF model weights ☆53 · Updated last year