multiplexerai / Namespace-RAG
☆13 · Updated last year
Alternatives and similar repositories for Namespace-RAG
Users interested in Namespace-RAG are comparing it to the libraries listed below.
- Serving LLMs in the HF-Transformers format via a PyFlask API ☆71 · Updated last year
- An OpenAI API-compatible LLM inference server based on ExLlamaV2 ☆25 · Updated last year
- Local LLM inference & management server with built-in OpenAI API ☆31 · Updated last year
- Run Ollama & GGUF models easily with a single command ☆52 · Updated last year
- A simple experiment on letting two local LLMs have a conversation about anything! ☆111 · Updated last year
- Experimental LLM inference UX to aid in creative writing ☆123 · Updated 10 months ago
- Gradio-based tool to run open-source LLM models directly from Hugging Face ☆96 · Updated last year
- Local character AI chatbot with Chroma vector-store memory and scripts to process documents for Chroma ☆33 · Updated last year
- After my server UI improvements were successfully merged, consider this repo a playground for experimenting, tinkering and hacking around… ☆53 · Updated last year
- Ollama models of NousResearch/Hermes-2-Pro-Mistral-7B-GGUF ☆31 · Updated last year
- Embed anything ☆27 · Updated last year
- An API for VoiceCraft ☆25 · Updated last year
- GRDN.AI app for garden optimization ☆70 · Updated last year
- Client-side toolkit for using large language models, including self-hosted ones ☆112 · Updated 10 months ago
- A simple Jupyter Notebook for learning MLX text-completion fine-tuning! ☆122 · Updated 11 months ago
- Dataset crafting with RAG/Wikipedia ground truth and efficient fine-tuning using MLX and Unsloth. Includes configurable dataset annotation … ☆186 · Updated last year
- Minimal, clean code implementation of RAG with MLX using GGUF model weights ☆52 · Updated last year
- A stable, fast and easy-to-use inference library with a focus on a sync-to-async API ☆45 · Updated last year
- All the world is a play, we are but actors in it. ☆50 · Updated 2 months ago
- "a towel is about the most massively useful thing an interstellar AI hitchhiker can have" ☆48 · Updated last year
- LLaVA server (llama.cpp) ☆183 · Updated last year
- For inferring and serving local LLMs using the MLX framework ☆109 · Updated last year
- A Python library to orchestrate LLMs in a neural network-inspired structure ☆50 · Updated last year
- GPT-2 small trained on phi-like data ☆67 · Updated last year
- Python package wrapping llama.cpp for on-device LLM inference ☆90 · Updated this week
- Let's create synthetic textbooks together :) ☆75 · Updated last year