victorcarre6 / llm-memorization
Give your local LLM a real memory with a lightweight, fully local memory system. 100% offline and under your control.
☆66 · Updated 4 months ago
Alternatives and similar repositories for llm-memorization
Users interested in llm-memorization are comparing it to the libraries listed below.
- Explore the unknown, build the future, own your data. ☆234 · Updated this week
- A web application that converts speech to speech, 100% private. ☆82 · Updated 8 months ago
- Use smol agents to do research and then update CSV columns with the findings. ☆41 · Updated last year
- Cognito: Supercharge your Chrome browser with AI. Guide, query, and control everything using natural language. ☆57 · Updated 3 weeks ago
- A synthetic dataset generation workflow using local file resources for fine-tuning LLMs. ☆82 · Updated 3 months ago
- ☆178 · Updated 5 months ago
- A fully autonomous agent that accesses the browser and performs tasks. ☆17 · Updated 9 months ago
- The PyVisionAI Official Repo ☆111 · Updated 6 months ago
- Generate a wiki for your research topic, sourcing from the web and your docs. ☆57 · Updated 10 months ago
- High-performance, lightweight proxy and load balancer for LLM infrastructure. Intelligent routing, automatic failover and unified model di… ☆133 · Updated this week
- A cross-platform app that gives you the best UX to run models locally or remotely on your own hardware. ☆72 · Updated last month
- Fast local speech-to-text for any app using faster-whisper. ☆146 · Updated 4 months ago
- Personal voice assistant, with voice interruption and Twilio support. ☆18 · Updated 11 months ago
- Dashboard v5 coming soon! ☆63 · Updated last month
- OLLama IMage CAtegorizer ☆70 · Updated last year
- ☆205 · Updated 4 months ago
- Plug-and-play memory for LLMs in 3 lines of code. Add persistent, intelligent, human-like memory and recall to any model in minutes. ☆252 · Updated 2 weeks ago
- One library to split them all: Sentence, Code, Docs. Chunk smarter, not harder: built for LLMs, RAG pipelines, and beyond. ☆57 · Updated this week
- The most feature-complete local AI workstation. Multi-GPU inference, integrated Stable Diffusion + ADetailer, voice cloning, research-gra… ☆52 · Updated last week
- Retrieval-augmented generation (RAG) for remote & local LLM use. ☆44 · Updated 8 months ago
- ☆19 · Updated 6 months ago
- A sleek web interface for Ollama, making local LLM management and usage simple. WebOllama provides an intuitive UI to manage Ollama model… ☆65 · Updated 3 months ago
- Local AI voice assistant stack for Home Assistant (GPU-accelerated) with persistent memory, follow-up conversation, and Ollama model reco… ☆227 · Updated 6 months ago
- The composable multi-agent shell. ☆199 · Updated this week
- Multi-agent autonomous research system using LangGraph and LangChain. Generates citation-backed reports with credibility scoring and web … ☆124 · Updated last month
- 🚀 FlexLLama - Lightweight self-hosted tool for running multiple llama.cpp server instances with OpenAI v1 API compatibility and multi-GP… ☆48 · Updated 2 months ago
- Local & private LLM that drafts responses LIKE you, automatically. ☆84 · Updated last year
- ☆58 · Updated 11 months ago
- ☆64 · Updated 7 months ago
- Python chat with Ollama models locally, plus Anthropic and OpenAI. ☆24 · Updated 9 months ago