gpt-2 from scratch in mlx
☆423 · Jun 12, 2024 · Updated last year
Alternatives and similar repositories for mlx-gpt2
Users interested in mlx-gpt2 are comparing it to the repositories listed below.
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ☆957 · Nov 16, 2025 · Updated 5 months ago
- MLX Swift implementation of Andrej Karpathy's "Let's build GPT" video ☆63 · Apr 14, 2024 · Updated 2 years ago
- Implementation of Diffusion Transformer (DiT) in JAX ☆311 · Jun 11, 2024 · Updated last year
- An alternative way of calculating self-attention ☆18 · May 25, 2024 · Updated last year
- ☆15 · Apr 26, 2025 · Updated 11 months ago
- On-device Image Generation for Apple Silicon ☆701 · Apr 11, 2025 · Updated last year
- A reinforcement learning framework based on MLX ☆254 · Dec 1, 2025 · Updated 4 months ago
- Simple Byte Pair Encoding mechanism for tokenization, written purely in C ☆148 · Nov 11, 2024 · Updated last year
- MLX native implementations of state-of-the-art generative image models ☆1,995 · Apr 10, 2026 · Updated last week
- Google+ Blog ☆15 · Oct 9, 2011 · Updated 14 years ago
- Video + code lecture on building nanoGPT from scratch ☆67 · Jun 14, 2024 · Updated last year
- Pure Python version of the mlabwrap Python-to-MATLAB bridge ☆31 · Nov 21, 2019 · Updated 6 years ago
- ☆12 · Jun 2, 2023 · Updated 2 years ago
- Phi-3.5 for Mac: locally run vision and language models for Apple Silicon ☆276 · Nov 9, 2025 · Updated 5 months ago
- Karpathy's llama2.c transpiled to MLX for Apple Silicon ☆14 · Dec 28, 2023 · Updated 2 years ago
- JAX implementation of ViT-VQGAN ☆63 · Jul 23, 2022 · Updated 3 years ago
- A collection of benchmark logs for different LLMs ☆121 · Jul 28, 2024 · Updated last year
- FastMLX: a high-performance, production-ready API for hosting MLX models ☆352 · Mar 18, 2025 · Updated last year
- ☆17 · Apr 3, 2026 · Updated 2 weeks ago
- Fast parallel LLM inference for MLX ☆249 · Jul 7, 2024 · Updated last year
- Large Language Model (LLM) applications and tools running in real time on Apple Silicon with Apple MLX ☆461 · Jan 29, 2025 · Updated last year
- Teardown of Google Glass ☆39 · Jan 11, 2014 · Updated 12 years ago
- Implementation of Nougat focused on processing PDFs locally ☆85 · Jan 15, 2025 · Updated last year
- Run embeddings in MLX ☆98 · Sep 27, 2024 · Updated last year
- Fast vector database made in NumPy ☆763 · Oct 9, 2025 · Updated 6 months ago
- Inference Llama 2 in one file of zero-dependency, zero-unsafe Rust ☆40 · Aug 2, 2023 · Updated 2 years ago
- Fine-tune Mistral-7B on 3090s, A100s, H100s ☆726 · Oct 11, 2023 · Updated 2 years ago
- A simple example of using MLX for a RAG application running locally on your Apple Silicon device ☆180 · Jan 31, 2024 · Updated 2 years ago
- Project code for training LLMs to write better unit tests + code ☆21 · May 19, 2025 · Updated 10 months ago
- Run PyTorch LLMs locally on servers, desktops, and mobile ☆3,624 · Sep 10, 2025 · Updated 7 months ago
- ☆25 · May 23, 2025 · Updated 10 months ago
- minimalist vector ad ☆11 · Feb 11, 2024 · Updated 2 years ago
- tiny_fnc_engine: a minimal Python library providing a flexible engine for calling functions extracted from an LLM ☆38 · Sep 11, 2024 · Updated last year
- MLX: an array framework for Apple silicon ☆25,423 · Updated this week
- Simple Transformer in JAX ☆143 · Jun 22, 2024 · Updated last year
- Examples in the MLX framework ☆8,498 · Apr 6, 2026 · Updated last week
- llama3 implementation, one matrix multiplication at a time ☆15,241 · May 23, 2024 · Updated last year
- Run large models from the terminal using Apple MLX ☆31 · Mar 18, 2024 · Updated 2 years ago
- ☆308 · Jul 15, 2024 · Updated last year