vithursant / nanoGPT_mlx
Port of Andrej Karpathy's nanoGPT to Apple MLX framework.
☆112 · Updated last year
Alternatives and similar repositories for nanoGPT_mlx
Users interested in nanoGPT_mlx are comparing it to the repositories listed below.
- ☆47 · Updated last year
- A simple MLX implementation for pretraining LLMs on Apple Silicon. ☆84 · Updated this week
- An example implementation of RLHF (or, more accurately, RLAIF) built on MLX and HuggingFace. ☆32 · Updated last year
- run embeddings in MLX ☆91 · Updated 10 months ago
- MLX Transformers is a library that provides model implementation in MLX. It uses a similar model interface as HuggingFace Transformers an… ☆66 · Updated 9 months ago
- ☆88 · Updated last year
- Benchmarks comparing PyTorch and MLX on Apple Silicon GPUs ☆88 · Updated last year
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated last year
- look how they massacred my boy ☆63 · Updated 10 months ago
- Simple GRPO scripts and configurations. ☆59 · Updated 6 months ago
- inference code for mixtral-8x7b-32kseqlen ☆101 · Updated last year
- MLX port for xjdr's entropix sampler (mimics jax implementation) ☆63 · Updated 9 months ago
- an open source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆105 · Updated 5 months ago
- Explore a simple example of utilizing MLX for RAG application running locally on your Apple Silicon device. ☆173 · Updated last year
- An introduction to LLM Sampling ☆79 · Updated 8 months ago
- an implementation of Self-Extend, to expand the context window via grouped attention ☆119 · Updated last year
- Fast parallel LLM inference for MLX ☆206 · Updated last year
- smolLM with Entropix sampler on pytorch ☆150 · Updated 9 months ago
- Train your own SOTA deductive reasoning model ☆104 · Updated 5 months ago
- An automated tool for discovering insights from research paper corpora ☆138 · Updated last year
- A collection of optimizers for MLX ☆50 · Updated last week
- Scripts to create your own moe models using mlx ☆90 · Updated last year
- ☆111 · Updated last year
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆232 · Updated 9 months ago
- Comprehensive analysis of difference in performance of QLora, Lora, and Full Finetunes. ☆82 · Updated last year
- ☆116 · Updated 8 months ago
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆55 · Updated 6 months ago
- Plotting (entropy, varentropy) for small LMs ☆98 · Updated 3 months ago
- KMD is a collection of conversational exchanges between patients and doctors on various medical topics. It aims to capture the intricaci… ☆24 · Updated last year
- A reinforcement learning framework based on MLX. ☆235 · Updated 6 months ago