jackcook / predictive-spy
Spying on Apple’s new predictive text model
☆136 · Updated last year
Alternatives and similar repositories for predictive-spy:
Users interested in predictive-spy are comparing it to the repositories listed below.
- MLX implementations of various transformers, with speedups and training ☆34 · Updated last year
- Benchmarks comparing PyTorch and MLX on Apple Silicon GPUs ☆70 · Updated 7 months ago
- Run embeddings in MLX ☆82 · Updated 4 months ago
- Run transformers (incl. LLMs) on the Apple Neural Engine. ☆57 · Updated last year
- Start a server from the MLX library. ☆172 · Updated 6 months ago
- Implement recursion using English as the programming language and an LLM as the runtime. ☆136 · Updated last year
- Mistral7B playing DOOM ☆127 · Updated 7 months ago
- Grammar checker with a keyboard shortcut for Ollama and Apple MLX with Automator on macOS. ☆77 · Updated last year
- A Swift library that runs Alpaca prediction locally to implement a ChatGPT-like app on Apple platform devices. ☆94 · Updated last year
- Implementation of Nougat that focuses on processing PDFs locally. ☆79 · Updated last month
- Command-line script for running inference with models such as MPT-7B-Chat ☆101 · Updated last year
- MLX-Embeddings is the best package for running Vision and Language Embedding models locally on your Mac using MLX. ☆96 · Updated 3 months ago
- A simple example of using MLX for a RAG application running locally on your Apple Silicon device. ☆156 · Updated last year
- System prompts from Apple’s new Apple Intelligence on macOS Sequoia ☆167 · Updated last month
- Save OpenAI API results to a SQLite database ☆231 · Updated 9 months ago
- FlashAttention (Metal Port) ☆435 · Updated 4 months ago
- An implementation of bucketMul LLM inference ☆215 · Updated 7 months ago
- Visualizing the internal board state of a GPT trained on chess PGN strings, and performing interventions on its internal board state and … ☆199 · Updated 2 months ago
- ☆111 · Updated 2 weeks ago
- Tiny inference-only implementation of LLaMA ☆92 · Updated 10 months ago
- Port of Andrej Karpathy’s nanoGPT to Apple MLX framework. ☆105 · Updated last year
- 1.58 Bit LLM on Apple Silicon using MLX ☆183 · Updated 9 months ago
- Fast parallel LLM inference for MLX ☆162 · Updated 7 months ago
- Run GGML models with Kubernetes. ☆174 · Updated last year
- Count and truncate text based on tokens ☆293 · Updated 9 months ago
- Finetune llama2-70b and codellama on MacBook Air without quantization ☆447 · Updated 10 months ago
- Easy-to-use Apple Vision wrapper for text extraction, scalar representation, and clustering using K-means. ☆99 · Updated last year
- Port of Facebook’s LLaMA model in C/C++ ☆45 · Updated last year
- ☆135 · Updated last month