noahfarr / rlx
A reinforcement learning framework based on MLX.
⭐233 · Updated 4 months ago
Alternatives and similar repositories for rlx
Users interested in rlx are comparing it to the libraries listed below.
- Port of Andrej Karpathy's nanoGPT to the Apple MLX framework. ⭐108 · Updated last year
- Swarming algorithms like PSO, Ant Colony, Sakana, and more in PyTorch ⭐123 · Updated 3 weeks ago
- Fast parallel LLM inference for MLX ⭐193 · Updated 11 months ago
- An MLX project to train a base model on your WhatsApp chats using (Q)LoRA fine-tuning ⭐168 · Updated last year
- Start a server from the MLX library. ⭐187 · Updated 10 months ago
- A simple example of using MLX for a RAG application running locally on your Apple Silicon device. ⭐171 · Updated last year
- Run PaliGemma in real time ⭐131 · Updated last year
- A simple Transformer in JAX ⭐137 · Updated last year
- History files recording human interaction while solving ARC tasks ⭐111 · Updated 2 weeks ago
- A simple MLX implementation for pretraining LLMs on Apple Silicon. ⭐80 · Updated last month
- Run embeddings in MLX ⭐90 · Updated 8 months ago
- ⭐114 · Updated 6 months ago
- Computer Vision and Machine Learning Jupyter Notebooks for Educational Purposes ⭐77 · Updated 6 months ago
- FastMLX is a high-performance, production-ready API for hosting MLX models. ⭐308 · Updated 3 months ago
- Benchmark of Apple MLX operations on all Apple Silicon chips (GPU, CPU) + MPS and CUDA. ⭐186 · Updated 2 weeks ago
- Efficient baselines for autocurricula in JAX. ⭐190 · Updated 10 months ago
- Efficient framework-agnostic data loading ⭐427 · Updated 2 weeks ago
- A puzzle to learn about prompting ⭐128 · Updated 2 years ago
- General multi-task deep RL Agent ⭐184 · Updated last year
- An introduction to LLM Sampling ⭐78 · Updated 6 months ago
- MLX Transformers is a library that provides model implementations in MLX. It uses a similar model interface to HuggingFace Transformers an… ⭐66 · Updated 7 months ago
- Cost aware hyperparameter tuning algorithmβ158Updated 11 months ago
- Minimal example scripts of the Hugging Face Trainer, focused on staying under 150 lines