jackcook / predictive-spy
Spying on Apple’s new predictive text model
☆137 · Updated last year
Alternatives and similar repositories for predictive-spy
Users interested in predictive-spy are comparing it to the libraries listed below
- Benchmarks comparing PyTorch and MLX on Apple Silicon GPUs ☆79 · Updated 9 months ago
- Run transformers (incl. LLMs) on the Apple Neural Engine. ☆61 · Updated last year
- Run embeddings in MLX ☆88 · Updated 7 months ago
- Benchmark of Apple MLX operations on all Apple Silicon chips (GPU, CPU) + MPS and CUDA. ☆178 · Updated last month
- LLM plugin for running models using MLC ☆186 · Updated last year
- 1.58-bit LLM on Apple Silicon using MLX ☆206 · Updated last year
- macOS background screen recorder/reader for easy history search ☆91 · Updated 2 years ago
- Run GGML models with Kubernetes. ☆173 · Updated last year
- An implementation of bucketMul LLM inference ☆217 · Updated 10 months ago
- MLX implementations of various transformers, speedups, training ☆34 · Updated last year
- Grammar checker with a keyboard shortcut for Ollama and Apple MLX with Automator on macOS. ☆80 · Updated last year
- MLX Swift implementation of Andrej Karpathy's "Let's build GPT" video ☆57 · Updated last year
- ☆56 · Updated 2 years ago
- Command-line script for running inference with models such as MPT-7B-Chat ☆101 · Updated last year
- GGUF implementation in C as a library and a tools CLI program ☆270 · Updated 4 months ago
- Start a server from the MLX library. ☆185 · Updated 9 months ago
- Finetune llama2-70b and codellama on MacBook Air without quantization ☆448 · Updated last year
- Port of Suno's Bark TTS transformer in Apple's MLX Framework ☆81 · Updated last year
- Port of Facebook's LLaMA model in C/C++ ☆45 · Updated last year
- Scripts to create your own MoE models using MLX ☆89 · Updated last year
- LLaVA server (llama.cpp). ☆180 · Updated last year
- A simple package to use CLIP on Apple Silicon using the MLX libraries from Apple ☆69 · Updated last year
- CLI to demonstrate running a large language model (LLM) on the Apple Neural Engine. ☆102 · Updated 4 months ago
- A simple example of using MLX for a RAG application running locally on your Apple Silicon device. ☆169 · Updated last year
- Implementation of nougat that focuses on processing PDFs locally. ☆81 · Updated 3 months ago
- ☆113 · Updated 3 months ago
- For running inference and serving local LLMs using the MLX framework ☆103 · Updated last year
- System prompts from Apple's new Apple Intelligence on macOS Sequoia ☆185 · Updated 4 months ago
- MLX-Embeddings is a package for running vision and language embedding models locally on your Mac using MLX. ☆150 · Updated 3 weeks ago
- ☆168 · Updated last month