vithursant / nanoGPT_mlx
Port of Andrej Karpathy's nanoGPT to Apple MLX framework.
☆107 · Updated last year
Alternatives and similar repositories for nanoGPT_mlx
Users interested in nanoGPT_mlx are comparing it to the libraries listed below.
- A simple MLX implementation for pretraining LLMs on Apple Silicon. ☆80 · Updated last month
- Benchmarks comparing PyTorch and MLX on Apple Silicon GPUs ☆85 · Updated 11 months ago
- Run embeddings in MLX ☆90 · Updated 8 months ago
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated 10 months ago
- MLX Transformers is a library that provides model implementations in MLX. It uses a model interface similar to HuggingFace Transformers an… ☆66 · Updated 7 months ago
- look how they massacred my boy ☆63 · Updated 8 months ago
- ☆47 · Updated last year
- MLX port of xjdr's entropix sampler (mimics the JAX implementation) ☆64 · Updated 7 months ago
- Simple GRPO scripts and configurations. ☆58 · Updated 4 months ago
- ☆38 · Updated last year
- An example implementation of RLHF (or, more accurately, RLAIF) built on MLX and HuggingFace. ☆29 · Updated last year
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full finetunes. ☆82 · Updated last year
- Optimizing causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆53 · Updated 4 months ago
- A collection of optimizers for MLX ☆36 · Updated 3 weeks ago
- ☆63 · Updated 3 weeks ago
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆101 · Updated 3 months ago
- Inference code for mixtral-8x7b-32kseqlen ☆100 · Updated last year
- Train large language models on MLX. ☆91 · Updated this week
- Full finetuning of large language models without large memory requirements ☆94 · Updated last year
- Simple Transformer in JAX ☆137 · Updated last year
- ☆87 · Updated last year
- smolLM with the Entropix sampler in PyTorch ☆150 · Updated 7 months ago
- An implementation of Self-Extend, which expands the context window via grouped attention ☆119 · Updated last year
- smol models are fun too ☆93 · Updated 7 months ago
- Fast parallel LLM inference for MLX ☆192 · Updated 11 months ago
- Train your own SOTA deductive reasoning model ☆94 · Updated 3 months ago
- ☆114 · Updated 6 months ago
- A simple example of using MLX for a RAG application running locally on your Apple Silicon device. ☆171 · Updated last year
- NanoGPT speedrunning for the poor T4 enjoyers ☆66 · Updated 2 months ago
- Scripts to create your own MoE models using MLX ☆90 · Updated last year