jackcook / predictive-spy
Spying on Apple’s new predictive text model
☆134 · Updated last year
Alternatives and similar repositories for predictive-spy
Users interested in predictive-spy are comparing it to the repositories listed below.
- MLX Swift implementation of Andrej Karpathy's Let's build GPT video ☆59 · Updated last year
- Benchmarks comparing PyTorch and MLX on Apple Silicon GPUs ☆88 · Updated last year
- mlx implementations of various transformers, speedups, training ☆33 · Updated last year
- run embeddings in MLX ☆93 · Updated last year
- An implementation of bucketMul LLM inference ☆223 · Updated last year
- Run GGML models with Kubernetes. ☆173 · Updated last year
- ☆116 · Updated 8 months ago
- 1.58 Bit LLM on Apple Silicon using MLX ☆223 · Updated last year
- A library for incremental loading of large PyTorch checkpoints ☆56 · Updated 2 years ago
- Horizon chart for CPU/GPU/Neural Engine utilization monitoring. Supports Apple M1-M4, Nvidia GPUs, AMD GPUs ☆26 · Updated 2 months ago
- Implement recursion using English as the programming language and an LLM as the runtime. ☆236 · Updated 2 years ago
- Mistral7B playing DOOM ☆137 · Updated last year
- Save OpenAI API results to a SQLite database ☆235 · Updated last year
- System prompts from Apple's new Apple Intelligence on macOS Sequoia ☆198 · Updated 9 months ago
- LLaVA server (llama.cpp). ☆183 · Updated last year
- Finetune llama2-70b and codellama on MacBook Air without quantization ☆449 · Updated last year
- Grammar checker with a keyboard shortcut for Ollama and Apple MLX with Automator on macOS. ☆82 · Updated last year
- Tiny inference-only implementation of LLaMA ☆92 · Updated last year
- Implementation of nougat that focuses on processing pdf locally. ☆83 · Updated 8 months ago
- Chat Markup Language conversation library ☆55 · Updated last year
- LLM plugin for running models using MLC ☆191 · Updated last year
- GenAI & agent toolkit for Apple Silicon Mac, implementing JSON schema-steered structured output (3SO) and tool-calling in Python. For mor… ☆128 · Updated last month
- WebGPU LLM inference tuned by hand ☆150 · Updated 2 years ago
- Start a server from the MLX library. ☆192 · Updated last year
- Drop in replacement for OpenAI, but with Open models. ☆153 · Updated 2 years ago
- Fine-tune a large language model on your own iMessages ☆117 · Updated 2 years ago
- Port of Facebook's LLaMA model in C/C++ ☆45 · Updated 2 years ago
- Clone your friends with iMessage and MLX ☆33 · Updated last year
- For inferring and serving local LLMs using the MLX framework ☆109 · Updated last year
- ☆254 · Updated 2 years ago