Mozilla-Ocho / Memory-Cache
MemoryCache is an experimental development project to turn a local desktop environment into an on-device AI agent
☆562 · Updated last year
Alternatives and similar repositories for Memory-Cache
Users interested in Memory-Cache are comparing it to the libraries listed below.
- A toolkit for applying LLMs to sensitive, non-public data in offline or restricted environments ☆807 · Updated this week
- MemoryCache is an experimental development project to turn a local desktop environment into an on-device AI agent ☆27 · Updated last year
- From anywhere you can type, query and stream the output of any script (e.g. an LLM) ☆502 · Updated last year
- ☆748 · Updated last year
- Marsha is a functional, higher-level, English-based programming language that gets compiled into tested Python software by an LLM ☆468 · Updated 2 years ago
- A minimal Python package for storing and retrieving text using chunking, embeddings, and vector search. ☆776 · Updated last year
- This project collects GPU benchmarks from various cloud providers and compares them to fixed per-token costs. Use our tool for efficient … ☆222 · Updated last year
- Agents Capable of Self-Editing Their Prompts / Python Code ☆799 · Updated last year
- A program synthesis agent that autonomously fixes its output by running tests! ☆468 · Updated last year
- Finetune llama2-70b and codellama on MacBook Air without quantization ☆450 · Updated last year
- LLM plugin providing access to Mistral models using the Mistral API ☆205 · Updated 5 months ago
- Finetune an LLM to speak like you based on your WhatsApp conversations ☆374 · Updated last year
- Instruct-tune LLaMA on consumer hardware ☆362 · Updated 2 years ago
- LLM Analytics ☆704 · Updated last year
- A FastAPI service for semantic text search using precomputed embeddings and advanced similarity measures, with built-in support for vario… ☆1,039 · Updated 10 months ago
- Prompt engineering for developers ☆695 · Updated last year
- Large language model evaluation and workflow framework from Phase AI. ☆459 · Updated 11 months ago
- ☆298 · Updated 9 months ago
- Neum AI is a best-in-class framework to manage the creation and synchronization of vector embeddings at large scale. ☆864 · Updated last year
- Enforce structured output from LLMs 100% of the time ☆249 · Updated last year
- AI-managed code blocks in Python ⏪⏩ ☆468 · Updated 2 years ago
- Action library for AI Agent ☆229 · Updated 9 months ago
- Replace OpenAI with Llama.cpp Automagically. ☆326 · Updated last year
- Radient turns many data types (not just text) into vectors for similarity search, RAG, regression analysis, and more. ☆281 · Updated 2 weeks ago
- Minimal Python library to connect to LLMs (OpenAI, Anthropic, Google, Groq, Reka, Together, AI21, Cohere, Aleph Alpha, HuggingfaceHub), w… ☆812 · Updated last week
- Implement recursion using English as the programming language and an LLM as the runtime. ☆239 · Updated 2 years ago
- Count and truncate text based on tokens ☆380 · Updated last year
- Bayesian Optimization as a Coverage Tool for Evaluating LLMs. Accurate evaluation (benchmarking) that's 10 times faster with just a few l… ☆287 · Updated 3 months ago
- A simple "Be My Eyes" web app with a llama.cpp/llava backend ☆492 · Updated 2 years ago
- LLM plugin providing access to models running on an Ollama server ☆346 · Updated 2 weeks ago