vithursant / nanoGPT_mlx
Port of Andrej Karpathy's nanoGPT to Apple MLX framework.
☆112 · Updated last year
Alternatives and similar repositories for nanoGPT_mlx
Users interested in nanoGPT_mlx are comparing it to the libraries listed below.
- A simple MLX implementation for pretraining LLMs on Apple Silicon. ☆84 · Updated 2 months ago
- Inference code for mixtral-8x7b-32kseqlen ☆102 · Updated last year
- ☆46 · Updated 2 years ago
- ☆88 · Updated last year
- MLX Transformers is a library that provides model implementations in MLX. It uses a similar model interface as HuggingFace Transformers an… ☆67 · Updated 11 months ago
- look how they massacred my boy ☆63 · Updated last year
- An example implementation of RLHF (or, more accurately, RLAIF) built on MLX and HuggingFace. ☆33 · Updated last year
- smolLM with Entropix sampler in PyTorch ☆150 · Updated 11 months ago
- Full finetuning of large language models without large memory requirements ☆93 · Updated last month
- An introduction to LLM sampling ☆79 · Updated 10 months ago
- Run embeddings in MLX ☆94 · Updated last year
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆107 · Updated 7 months ago
- Simple GRPO scripts and configurations. ☆59 · Updated 8 months ago
- An implementation of Self-Extend, to expand the context window via grouped attention ☆118 · Updated last year
- Benchmarks comparing PyTorch and MLX on Apple Silicon GPUs ☆89 · Updated last year
- ☆68 · Updated 5 months ago
- ☆40 · Updated last year
- Train your own SOTA deductive reasoning model ☆108 · Updated 7 months ago
- MLX port of xjdr's entropix sampler (mimics the JAX implementation) ☆62 · Updated 11 months ago
- Scripts to create your own MoE models using MLX ☆90 · Updated last year
- An automated tool for discovering insights from research paper corpora ☆138 · Updated last year
- Explore a simple example of using MLX for a RAG application running locally on your Apple Silicon device. ☆177 · Updated last year
- A tree-based prefix cache library that allows rapid creation of looms: hierarchical branching pathways of LLM generations. ☆72 · Updated 8 months ago
- Just a bunch of benchmark logs for different LLMs ☆118 · Updated last year
- ☆112 · Updated last year
- Optimizing causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆58 · Updated last week
- ☆38 · Updated last year
- Fast parallel LLM inference for MLX ☆223 · Updated last year
- ☆136 · Updated last year
- ☆54 · Updated last year