mzbac / mlx-moe
Scripts to create your own MoE models using MLX
☆90 · Updated last year
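A minimal usage sketch, not taken from the repo itself: it only assumes that a model produced by MoE-creation scripts like these can be loaded with the separate mlx-lm package, and the model path shown is hypothetical.

```python
# Minimal sketch, assuming mlx-lm is installed (`pip install mlx-lm`) and that the
# merged MoE model has been saved locally or pushed to Hugging Face.
from mlx_lm import load, generate

# Hypothetical path: replace with the directory or repo id of your merged MoE model.
model, tokenizer = load("path/to/your-mlx-moe-model")

# Quick generation to sanity-check the merged experts.
print(generate(model, tokenizer, prompt="Explain mixture-of-experts briefly.", max_tokens=64))
```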
Alternatives and similar repositories for mlx-moe
Users who are interested in mlx-moe are comparing it to the libraries listed below.
- ☆116 · Updated 8 months ago
- Very basic framework for composable parameterized large language model (Q)LoRA / (Q)DoRA fine-tuning using mlx, mlx_lm, and OgbujiPT. ☆42 · Updated 2 months ago
- ☆38 · Updated last year
- Transcribe and summarize videos using Whisper and LLMs on the Apple MLX framework ☆75 · Updated last year
- ☆67 · Updated last year
- Fast approximate inference on a single GPU with sparsity-aware offloading ☆38 · Updated last year
- Inference code for mixtral-8x7b-32kseqlen ☆101 · Updated last year
- Hugging Face chat-ui integration with the mlx-lm server ☆61 · Updated last year
- GRDN.AI app for garden optimization ☆70 · Updated last year
- ☆161 · Updated last month
- An implementation of Self-Extend, to expand the context window via grouped attention ☆118 · Updated last year
- GPT-4 Level Conversational QA Trained In a Few Hours ☆64 · Updated last year
- Minimal, clean code implementation of RAG with mlx using GGUF model weights ☆52 · Updated last year
- ☆102 · Updated last year
- All the world is a play, we are but actors in it. ☆50 · Updated last month
- Let's create synthetic textbooks together :) ☆75 · Updated last year
- Generate Synthetic Data Using OpenAI, MistralAI or AnthropicAI ☆222 · Updated last year
- A Python package for serving LLMs on OpenAI-compatible API endpoints with prompt caching using MLX. ☆95 · Updated 2 months ago
- Cerule - A Tiny Mighty Vision Model ☆68 · Updated last year
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated last year
- Video+code lecture on building nanoGPT from scratch ☆69 · Updated last year
- ☆74 · Updated 2 years ago
- Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or any other RLHF techniques, while always keeping a data-first app… ☆169 · Updated last year
- Distributed inference for MLX LLMs ☆95 · Updated last year
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆146 · Updated 6 months ago
- Low-Rank adapter extraction for fine-tuned transformers models ☆176 · Updated last year
- For running inference on and serving local LLMs using the MLX framework ☆109 · Updated last year
- ☆28 · Updated last year
- tiny_fnc_engine is a minimal Python library that provides a flexible engine for calling functions extracted from an LLM. ☆38 · Updated last year
- Port of Andrej Karpathy's nanoGPT to the Apple MLX framework. ☆112 · Updated last year