unixwzrd / oobabooga-macOS
Optimizing performance and building and installing the packages required for oobabooga, AI, and Data Science on Apple Silicon GPUs. The goal is to optimize wherever possible, from the ground up.
☆64 · Updated this week
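Much of that optimization work comes down to making sure PyTorch and the rest of the stack actually run on the Apple Silicon GPU rather than the CPU. As a minimal, hypothetical sketch (not code from this repository), a quick check for PyTorch's Metal (MPS) backend might look like this:

```python
# Hypothetical illustration (not from this repository): verify that PyTorch
# can see the Apple Silicon GPU via the Metal Performance Shaders (MPS) backend.
import torch

if torch.backends.mps.is_available():
    device = torch.device("mps")
    print("Using the Apple Silicon GPU (MPS backend)")
else:
    device = torch.device("cpu")
    print("MPS backend not available; falling back to CPU")

# Run a small tensor computation on the selected device.
x = torch.rand(1024, 1024, device=device)
y = x @ x.T
print(y.device)
```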
Related projects
Alternatives and complementary repositories for oobabooga-macOS
- A macOS version of the oobabooga gradio web UI for running Large Language Models like LLaMA, llama.cpp, GPT-J, Pythia, OPT, and GALACTICA… ☆21 · Updated this week
- Host the GPTQ model using AutoGPTQ as an API that is compatible with the text generation UI API. ☆91 · Updated last year
- AutoNL - Natural Language Automation tool ☆83 · Updated 8 months ago
- After my server UI improvements were successfully merged, consider this repo a playground for experimenting, tinkering and hacking around… ☆56 · Updated 3 months ago
- RetroChat is a powerful command-line interface for interacting with various AI language models. It provides a seamless experience for eng… ☆66 · Updated 2 months ago
- Easily create LLM automation/agent workflows ☆55 · Updated 9 months ago
- For inferring and serving local LLMs using the MLX framework ☆89 · Updated 7 months ago
- Use a local LLaMA LLM or OpenAI to chat with, discuss, or summarize your documents, YouTube videos, and so on. ☆152 · Updated 6 months ago
- An extension that lets the AI take the wheel, allowing it to use the mouse and keyboard, recognize UI elements, and prompt itself :3...no… ☆96 · Updated last month
- A simple script to enhance text editing across your Mac, leveraging the power of MLX. Designed for seamless integration, it offers real-t… ☆105 · Updated 8 months ago
- Client-side toolkit for using large language models, including where self-hosted ☆102 · Updated this week
- SiLLM simplifies the process of training and running Large Language Models (LLMs) on Apple Silicon by leveraging the MLX framework. ☆226 · Updated this week
- A guidance compatibility layer for llama-cpp-python ☆34 · Updated last year
- An API for VoiceCraft. ☆26 · Updated 4 months ago
- Local LLM inference & management server with built-in OpenAI API ☆31 · Updated 7 months ago
- This code implements a Local LLM Selector from the list of locally installed Ollama LLMs for your specific user query ☆103 · Updated 11 months ago
- A simple speech-to-text and text-to-speech AI chatbot that can be run fully offline. ☆42 · Updated 9 months ago
- Experimental LLM Inference UX to aid in creative writing ☆106 · Updated 4 months ago
- PyPlexitas is an open-source Python CLI alternative to Perplexity AI, designed to perform web searches, scrape content, generate embeddin… ☆34 · Updated 5 months ago
- Grammar checker with a keyboard shortcut for Ollama and Apple MLX with Automator on macOS. ☆76 · Updated 9 months ago
- Text generation in Python, as easy as possible ☆42 · Updated this week
- Python bindings for the C++ port of GPT4All-J model. ☆38 · Updated last year
- A simple updated Colab doc that will allow you to run the Ooba Booga Text-Generation-Webui for free with just a few lines of code. ☆22 · Updated last month
- Creates a LangChain agent which uses the WebUI's API and Wikipedia to work ☆73 · Updated last year
- A web-app to explore topics using LLM (less typing and more clicks) ☆65 · Updated 10 months ago
- A simple experiment on letting two local LLMs have a conversation about anything! ☆92 · Updated 4 months ago
- SmartGPT is a Node.js webpage implementation of a dynamic prompting system, inspired by [AI Explained](https://www.youtube.com/@ai-explai… ☆63 · Updated 7 months ago