vithursant / nanoGPT_mlx
Port of Andrej Karpathy's nanoGPT to Apple MLX framework.
☆116 · Updated last year
Alternatives and similar repositories for nanoGPT_mlx
Users interested in nanoGPT_mlx are comparing it to the libraries listed below.
- A simple MLX implementation for pretraining LLMs on Apple Silicon. ☆84 · Updated 4 months ago
- An example implementation of RLHF (or, more accurately, RLAIF) built on MLX and HuggingFace. ☆35 · Updated last year
- ☆45 · Updated 2 years ago
- Run embeddings in MLX. ☆96 · Updated last year
- look how they massacred my boy ☆63 · Updated last year
- MLX Transformers is a library that provides model implementations in MLX. It uses a similar model interface as HuggingFace Transformers an… ☆68 · Updated last year
- Benchmarks comparing PyTorch and MLX on Apple Silicon GPUs. ☆92 · Updated last year
- Inference code for mixtral-8x7b-32kseqlen. ☆104 · Updated 2 years ago
- ☆86 · Updated last year
- Explore a simple example of using MLX for a RAG application running locally on your Apple Silicon device. ☆179 · Updated last year
- Optimizing causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna. ☆59 · Updated 2 months ago
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere). ☆108 · Updated 9 months ago
- SmolLM with the Entropix sampler in PyTorch. ☆149 · Updated last year
- An implementation of Nougat that focuses on processing PDFs locally. ☆83 · Updated 11 months ago
- Scripts to create your own MoE models using MLX. ☆90 · Updated last year
- ☆112 · Updated 2 years ago
- An implementation of Self-Extend, which expands the context window via grouped attention. ☆119 · Updated last year
- ☆68 · Updated 7 months ago
- A collection of optimizers for MLX. ☆54 · Updated 2 weeks ago
- An introduction to LLM sampling. ☆79 · Updated last year
- Fast parallel LLM inference for MLX. ☆238 · Updated last year
- A tree-based prefix cache library that allows rapid creation of looms: hierarchical branching pathways of LLM generations. ☆77 · Updated 10 months ago
- Plotting (entropy, varentropy) for small LMs. ☆99 · Updated 7 months ago
- Train your own SOTA deductive reasoning model. ☆107 · Updated 9 months ago
- Comprehensive analysis of the performance differences among QLoRA, LoRA, and full fine-tunes. ☆83 · Updated 2 years ago
- Simple GRPO scripts and configurations. ☆59 · Updated 10 months ago
- ☆55 · Updated last year
- Full fine-tuning of large language models without large memory requirements. ☆94 · Updated 3 months ago
- A simple Transformer in JAX. ☆141 · Updated last year
- Just a bunch of benchmark logs for different LLMs. ☆119 · Updated last year