chenhunghan / mlx-training-rs
A CLI in Rust to generate synthetic data for MLX-friendly training
☆24 · Updated last year
Alternatives and similar repositories for mlx-training-rs
Users interested in mlx-training-rs are comparing it to the libraries listed below.
- Light WebUI for lm.rs ☆24 · Updated 11 months ago
- auto-rust is an experimental project that automatically generates Rust code with LLMs (Large Language Models) during compilation, utilizing… ☆41 · Updated 10 months ago
- A Python package for serving LLMs on OpenAI-compatible API endpoints with prompt caching, using MLX ☆95 · Updated 2 months ago
- LLM-based file organizer ☆27 · Updated 2 years ago
- Inference Llama 2 in one file of zero-dependency, zero-unsafe Rust ☆39 · Updated 2 years ago
- Code for fine-tuning LLMs with GRPO for Rust programming, using cargo as feedback ☆103 · Updated 6 months ago
- Super-simple, fully Rust-powered "memory" (doc store + semantic search) for LLM projects ☆62 · Updated last year
- A library for working with GBNF files ☆25 · Updated last week
- ☆10 · Updated 2 years ago
- Implementing the BitNet model in Rust ☆39 · Updated last year
- A collection of optimizers for MLX ☆52 · Updated last week
- LLM Divergent Thinking Creativity Benchmark: LLMs generate 25 unique words that start with a given letter with no connections to each oth… ☆33 · Updated 5 months ago
- Very basic framework for composable, parameterized large language model (Q)LoRA / (Q)DoRA fine-tuning using mlx, mlx_lm, and OgbujiPT ☆42 · Updated 2 months ago
- AirLLM 70B inference with a single 4GB GPU ☆14 · Updated 2 months ago
- ⚡️ Lightning-fast in-memory VectorDB written in Rust 🦀 ☆25 · Updated 6 months ago
- Implementation of Nougat that focuses on processing PDFs locally ☆82 · Updated 8 months ago
- Unleash the full potential of exascale LLMs on consumer-class GPUs, proven by extensive benchmarks, with no long-term adjustments and min… ☆26 · Updated 10 months ago
- Easily convert HuggingFace models to GGUF format for llama.cpp ☆22 · Updated last year
- Run LLaMA inference on CPU, with Rust 🦀🚀🦙 ☆24 · Updated 2 years ago
- OpenAI-compatible API for serving the LLaMA-2 model ☆218 · Updated last year
- Very minimal (and stateless) agent framework ☆45 · Updated 8 months ago
- ☆16 · Updated last year
- Fast serverless LLM inference, in Rust ☆91 · Updated 6 months ago
- User-friendly CLI tool for AI tasks. Stop thinking about LLMs and prompts, start getting results! ☆122 · Updated 2 weeks ago
- Powerful and fast tool-calling agents ☆55 · Updated 5 months ago
- 33B Chinese LLM, DPO QLoRA, 100K context; AirLLM 70B inference with a single 4GB GPU ☆13 · Updated last year
- Ask shortgpt for instant and concise answers ☆13 · Updated 2 years ago
- GPU-accelerated client-side embeddings for vector search, RAG, etc. ☆65 · Updated last year
- ollama-like CLI tool for MLX models on Hugging Face (pull, rm, list, show, serve, etc.) ☆101 · Updated this week
- Open-source Rewind.ai clone written in Rust and Vue, running 100% locally with whisper.cpp ☆51 · Updated 2 years ago