juliooa / secondbrain
Multi-platform desktop app to download and run Large Language Models (LLMs) locally on your computer.
☆273 · Updated last year
Alternatives and similar repositories for secondbrain:
Users interested in secondbrain are comparing it to the repositories listed below.
- OpenAI-compatible API for serving the LLaMA-2 model ☆215 · Updated last year
- 🎒 local.ai - Run AI locally on your PC! ☆661 · Updated last year
- Super-simple, fully Rust-powered "memory" (doc store + semantic search) for LLM projects and more. ☆56 · Updated last year
- An open-source VS Code extension and AI coding assistant that integrates with Ollama, HuggingFace, OpenAI, and Anthropic. ☆174 · Updated this week
- From anywhere you can type, query and stream the output of an LLM or any other script ☆482 · Updated 9 months ago
- Use a local LLaMA LLM or OpenAI to chat with, discuss, or summarize your documents, YouTube videos, and so on. ☆152 · Updated last month
- Octogen is an open-source code interpreter agent framework ☆256 · Updated 5 months ago
- Open-source alternative to Perplexity AI with the ability to run locally ☆178 · Updated 3 months ago
- An autonomous LLM agent that runs on WizardCoder-15B ☆338 · Updated 3 months ago
- LLM Orchestrator built in Rust ☆267 · Updated 10 months ago
- Edge full-stack LLM platform. Written in Rust ☆375 · Updated 8 months ago
- All-in-one desktop app for running LLMs locally. ☆437 · Updated 2 months ago
- BabyAGI to run with GPT4All ☆249 · Updated last year
- Like grep but for natural language questions. Based on Mistral 7B or Mixtral 8x7B. ☆379 · Updated 10 months ago
- 🦀 A curated list of Rust tools, libraries, and frameworks for working with LLMs, GPT, AI ☆332 · Updated 10 months ago
- The Easiest Rust Interface for Local LLMs and an Interface for Deterministic Signals from Probabilistic LLM Vibes ☆165 · Updated this week
- LLM fine-tuning and eval ☆344 · Updated 10 months ago
- A simple LLM chat front-end that makes it easy to find, download, and mess around with models on your local machine. ☆65 · Updated last year
- ⚡ Edgen: Local, private GenAI server alternative to OpenAI. No GPU required. Run AI models locally: LLMs (Llama2, Mistral, Mixtral...), … ☆349 · Updated 8 months ago
- Prem App: your personal AI in your pocket ☆463 · Updated last year
- ChatGPT UI that is keyboard-centric, mobile friendly, and searchable. ☆179 · Updated 3 weeks ago
- Native GUI for several AI services plus local llama.cpp AIs. ☆108 · Updated last year
- A single-binary, GPU-accelerated LLM server (HTTP and WebSocket API) written in Rust ☆79 · Updated last year
- Use your own AI models on the web ☆906 · Updated 5 months ago
- ✨ If PostHog built Zapier. A feature-complete, full-stack AI automation framework made for users. ☆229 · Updated last month
- A fully in-browser privacy solution to make Conversational AI privacy-friendly ☆225 · Updated 3 months ago
- 💬 Chatbot web app + HTTP and WebSocket endpoints for LLM inference with the Petals client ☆308 · Updated 9 months ago
- A simple web UI / app / frontend for Ollama. ☆81 · Updated 10 months ago
- A GUI interface for Ollama ☆313 · Updated 3 months ago
- An autonomous AI agent extension for Oobabooga's web UI ☆176 · Updated last year