GusLovesMath / Local_LLM_Training_Apple_Silicon
Built a local LLM training workflow on Apple Silicon with MLX and the Metal API, working around the absence of CUDA support. Fine-tuned Llama 3 on a 16-core Apple GPU to solve verbose math word problems efficiently. The result: a capable, privacy-preserving chatbot that runs smoothly on-device.
☆22 · Updated last year
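For context, here is a minimal sketch of what on-device chat with a locally fine-tuned model can look like through the mlx-lm Python API. This is an illustration, not code from the repository: the base model ID, adapter directory, and prompt are placeholder assumptions, and exact keyword arguments can vary between mlx-lm versions.

```python
# Hypothetical sketch: chatting with a fine-tuned Llama 3 on-device using mlx-lm.
# Assumes `pip install mlx-lm` and that LoRA adapters were trained separately
# (e.g. with the mlx_lm.lora entry point); paths below are placeholders.
from mlx_lm import load, generate

model, tokenizer = load(
    "meta-llama/Meta-Llama-3-8B-Instruct",  # base model, cached locally after first download
    adapter_path="adapters",                # directory holding the fine-tuned LoRA weights
)

question = "A train travels 120 km in 1.5 hours. What is its average speed in km/h?"

# Wrap the question in the model's chat template so the instruct model answers cleanly.
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": question}],
    tokenize=False,
    add_generation_prompt=True,
)

print(generate(model, tokenizer, prompt=prompt, max_tokens=256))
```

Because generation runs entirely through MLX and Metal on the local machine, nothing leaves the device, which is the privacy argument the project makes.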
Alternatives and similar repositories for Local_LLM_Training_Apple_Silicon
Users interested in Local_LLM_Training_Apple_Silicon are comparing it to the libraries listed below.
- A simple Jupyter Notebook for learning MLX text-completion fine-tuning! ☆123 · Updated last year
- Gradio-based tool to run open-source LLM models directly from Hugging Face ☆97 · Updated last year
- Rivet plugin for integration with Ollama, the tool for easily running LLMs locally ☆43 · Updated 8 months ago
- 🧠 Retrieval Augmented Generation (RAG) example ☆19 · Updated 5 months ago
- Auto Data is a library designed for quick and effortless creation of datasets tailored for fine-tuning Large Language Models (LLMs). ☆105 · Updated last year
- Your gateway to both Ollama & Apple MLX models ☆150 · Updated 11 months ago
- For serving and running inference on local LLMs using the MLX framework ☆110 · Updated last year
- Generate train.jsonl and valid.jsonl files to use for fine-tuning Mistral and other LLMs (a format sketch follows this list). ☆96 · Updated 2 years ago
- Unsloth Studio ☆126 · Updated 10 months ago
- Minimal, clean-code implementation of RAG with MLX using GGUF model weights ☆53 · Updated last year
- ☆109 · Updated 5 months ago
- Run Ollama & GGUF models easily with a single command ☆52 · Updated last year
- Local character AI chatbot with Chroma vector store memory and some scripts to process documents for Chroma ☆34 · Updated last year
- My version of an LLM websearch agent using a local SearXNG server, because SearXNG is great. ☆39 · Updated last week
- Dagger functions to import Hugging Face GGUF models into a local Ollama instance and optionally push them to ollama.com. ☆119 · Updated last year
- Transcribe and summarize videos using Whisper and LLMs on the Apple MLX framework ☆77 · Updated 2 years ago
- Simple GUI to load a PDF/Docx/txt file and have LM Studio answer based on it. ☆14 · Updated last year
- Serving LLMs in the HF-Transformers format via a PyFlask API ☆72 · Updated last year
- This small API downloads and exposes access to NeuML's txtai-wikipedia and full Wikipedia datasets, taking in a query and returning full … ☆103 · Updated 5 months ago
- A simple experiment on letting two local LLMs have a conversation about anything! ☆112 · Updated last year
- Dataset crafting with RAG/Wikipedia ground truth and efficient fine-tuning using MLX and Unsloth. Includes configurable dataset annotation … ☆193 · Updated last year
- Function Calling Mistral 7B. Learn how to make function calls with open-source LLMs. ☆48 · Updated last year
- Link your Ollama models to LM Studio ☆150 · Updated last year
- RAG example using DSPy, Gradio, FastAPI ☆90 · Updated last year
- 😎 Awesome list of tools and projects with the awesome LangChain framework ☆19 · Updated 2 years ago
- ☆34 · Updated last year
- Real-time TTS reading of large text files in your favourite voice, plus translation via LLM (Python script) ☆52 · Updated last year
- Very basic framework for composable parameterized large language model (Q)LoRA / (Q)DoRA fine-tuning using mlx, mlx_lm, and OgbujiPT. ☆43 · Updated 7 months ago
- Local LLMs in One Line Of Code (thanks to llamafile) ☆45 · Updated 2 years ago
- An MCP server that provides LLMs access to other LLMs ☆76 · Updated 10 months ago
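Several entries above deal with preparing fine-tuning data, including the train.jsonl/valid.jsonl generator. Here is a rough, hypothetical sketch of that file layout: the single "text" field is one common convention for mlx-lm-style LoRA training, while other tools expect "prompt"/"completion" pairs or chat-message records, so adjust the schema to your trainer.

```python
# Hypothetical sketch of writing train.jsonl / valid.jsonl for instruction fine-tuning.
# The {"text": ...} record shape is an assumption; check your trainer's expected schema.
import json
import random

examples = [
    {"q": "If 3 pencils cost $1.50, how much do 8 pencils cost?", "a": "8 pencils cost $4.00."},
    {"q": "A train travels 120 km in 1.5 hours. What is its average speed?", "a": "Its average speed is 80 km/h."},
]

random.shuffle(examples)
split = max(1, int(0.9 * len(examples)))  # keep at least one training example

def to_record(ex):
    # Fold the Q/A pair into one training string; swap in your own prompt template.
    return {"text": f"Question: {ex['q']}\nAnswer: {ex['a']}"}

for path, subset in [("train.jsonl", examples[:split]), ("valid.jsonl", examples[split:])]:
    with open(path, "w") as f:
        for ex in subset:
            f.write(json.dumps(to_record(ex)) + "\n")
```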