mark-lord / MLX-text-completion-notebook
A simple Jupyter Notebook for learning MLX text-completion fine-tuning!
☆120 · Updated 9 months ago
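Since the repository is a fine-tuning tutorial, a minimal sketch of the kind of text-completion workflow it covers may help orient the comparison below. This is not code from the notebook itself; it assumes the mlx-lm package's load/generate API and uses an example mlx-community model name, both of which may differ from what the notebook actually uses.

```python
# Minimal sketch (not from the notebook): text completion with mlx-lm,
# the package many of the projects listed below build on.
# The model name is an example from the mlx-community Hugging Face org.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.3-4bit")

prompt = "Fine-tuning on Apple Silicon works by"
completion = generate(model, tokenizer, prompt=prompt, max_tokens=64)
print(completion)
```

For the fine-tuning side, mlx-lm also ships a LoRA entry point (typically invoked along the lines of `python -m mlx_lm.lora --model <path> --train --data <dir>`, where the data directory holds the train.jsonl/valid.jsonl files that several of the projects below help generate); exact flags vary by version.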
Alternatives and similar repositories for MLX-text-completion-notebook
Users interested in MLX-text-completion-notebook are comparing it to the libraries listed below.
- For inferring and serving local LLMs using the MLX framework ☆109 · Updated last year
- SiLLM simplifies the process of training and running Large Language Models (LLMs) on Apple Silicon by leveraging the MLX framework. ☆278 · Updated 2 months ago
- A simple UI / Web / Frontend for MLX mlx-lm using Streamlit. ☆261 · Updated 2 months ago
- Phi-3.5 for Mac: Locally-run Vision and Language Models for Apple Silicon ☆271 · Updated 11 months ago
- Generate train.jsonl and valid.jsonl files to use for fine-tuning Mistral and other LLMs. ☆97 · Updated last year
- Explore a simple example of utilizing MLX for a RAG application running locally on your Apple Silicon device. ☆174 · Updated last year
- FastMLX is a high-performance, production-ready API to host MLX models. ☆325 · Updated 5 months ago
- Dataset crafting with RAG/Wikipedia ground truth and efficient fine-tuning using MLX and Unsloth. Includes configurable dataset annotation … ☆184 · Updated last year
- GenAI & agent toolkit for Apple Silicon Mac, implementing JSON schema-steered structured output (3SO) and tool-calling in Python. For mor… ☆129 · Updated 2 months ago
- Generate synthetic data using OpenAI, MistralAI or AnthropicAI ☆222 · Updated last year
- A Python package for serving LLMs on OpenAI-compatible API endpoints with prompt caching using MLX. ☆92 · Updated last month
- Fast parallel LLM inference for MLX ☆206 · Updated last year
- Start a server from the MLX library. ☆191 · Updated last year
- Distributed inference for MLX LLMs ☆93 · Updated last year
- Client-side toolkit for using large language models, including where self-hosted ☆113 · Updated 9 months ago
- Minimal, clean-code implementation of RAG with MLX using GGUF model weights ☆52 · Updated last year
- ☆161 · Updated 2 weeks ago
- klmbr - a prompt pre-processing technique to break through the barrier of entropy while generating text with LLMs ☆80 · Updated 11 months ago
- Grammar checker with a keyboard shortcut for Ollama and Apple MLX with Automator on macOS. ☆82 · Updated last year
- Experimental LLM inference UX to aid in creative writing ☆120 · Updated 8 months ago
- Function calling-based LLM agents ☆288 · Updated 11 months ago
- MLX-Embeddings is the best package for running Vision and Language Embedding models locally on your Mac using MLX. ☆193 · Updated last week
- Serving LLMs in the HF-Transformers format via a PyFlask API ☆71 · Updated 11 months ago
- ☆95 · Updated this week
- ☆116 · Updated 8 months ago
- Gradio-based tool to run open-source LLMs directly from Hugging Face ☆94 · Updated last year
- ☆132 · Updated 3 months ago
- Large Language Model (LLM) applications and tools running on Apple Silicon in real time with Apple MLX. ☆453 · Updated 6 months ago
- ☆314 · Updated 3 weeks ago
- Dagger functions to import Hugging Face GGUF models into a local Ollama instance and optionally push them to ollama.com. ☆116 · Updated last year