jackcook / predictive-spy
Spying on Apple’s new predictive text model
☆ 136 · Updated last year
Alternatives and similar repositories for predictive-spy:
Users interested in predictive-spy are comparing it to the libraries listed below.
- Finetune llama2-70b and codellama on MacBook Air without quantization ☆ 448 · Updated last year
- Mistral7B playing DOOM ☆ 130 · Updated 8 months ago
- Implement recursion using English as the programming language and an LLM as the runtime. ☆ 137 · Updated last year
- GGUF implementation in C as a library and a tools CLI program ☆ 263 · Updated 2 months ago
- Run GGML models with Kubernetes. ☆ 174 · Updated last year
- Run embeddings in MLX ☆ 84 · Updated 6 months ago
- Run transformers (incl. LLMs) on the Apple Neural Engine. ☆ 59 · Updated last year
- A simple script to enhance text editing across your Mac, leveraging the power of MLX. Designed for seamless integration, it offers real-t… ☆ 104 · Updated last year
- A feed of trending repos/models from GitHub, Replicate, HuggingFace, and Reddit. ☆ 123 · Updated 6 months ago
- MLX implementations of various transformers, speedups, training ☆ 34 · Updated last year
- SiLLM simplifies the process of training and running Large Language Models (LLMs) on Apple Silicon by leveraging the MLX framework. ☆ 260 · Updated 2 weeks ago
- Extend the original llama.cpp repo to support the RedPajama model. ☆ 117 · Updated 6 months ago
- MLX Swift implementation of Andrej Karpathy's "Let's build GPT" video ☆ 57 · Updated 11 months ago
- A library for incremental loading of large PyTorch checkpoints ☆ 56 · Updated 2 years ago
- Port of Suno's Bark TTS transformer in Apple's MLX framework ☆ 78 · Updated last year
- A Swift library that runs Alpaca prediction locally to implement a ChatGPT-like app on Apple platform devices. ☆ 94 · Updated 2 years ago
- Implementation of Nougat that focuses on processing PDFs locally. ☆ 80 · Updated 2 months ago
- ☆ 153 · Updated 2 weeks ago
- Benchmarks comparing PyTorch and MLX on Apple Silicon GPUs ☆ 76 · Updated 8 months ago
- Getting GPT-4 to draw a new unicorn every day ☆ 78 · Updated 10 months ago
- Experimental fork of Facebook's LLaMA model which runs with GPU acceleration on Apple Silicon M1/M2 ☆ 86 · Updated last year
- Command-line script for running inference with models such as falcon-7b-instruct ☆ 76 · Updated last year
- ☆ 136 · Updated last year
- LLaVA server (llama.cpp). ☆ 179 · Updated last year
- Port of Facebook's LLaMA model in C/C++ ☆ 45 · Updated last year
- MLX-Embeddings is the best package for running Vision and Language Embedding models locally on your Mac using MLX. ☆ 112 · Updated this week
- System prompts from Apple's new Apple Intelligence on macOS Sequoia ☆ 175 · Updated 2 months ago
- ☆ 253 · Updated last year
- LLM plugin for running models using MLC ☆ 184 · Updated last year
- Run large models from the terminal using Apple MLX. ☆ 29 · Updated last year