rickardp / bitsandbytes
8-bit CUDA functions for PyTorch
☆18 · Updated last year
Alternatives and similar repositories for bitsandbytes
Users interested in bitsandbytes are comparing it to the libraries listed below.
- Port of Suno's Bark TTS transformer in Apple's MLX Framework ☆86 · Updated last year
- ☆38 · Updated last year
- A simple Jupyter Notebook for learning MLX text-completion fine-tuning! ☆123 · Updated last year
- Scripts to create your own MoE models using MLX ☆90 · Updated last year
- Experimental LLM inference UX to aid in creative writing ☆127 · Updated last year
- Experimental sampler to make LLMs more creative ☆31 · Updated 2 years ago
- Very basic framework for composable parameterized large language model (Q)LoRA / (Q)DoRA fine-tuning using mlx, mlx_lm, and OgbujiPT. ☆43 · Updated 7 months ago
- A Python package for serving LLMs on OpenAI-compatible API endpoints with prompt caching using MLX. ☆100 · Updated 7 months ago
- Minimal, clean-code implementation of RAG with MLX using GGUF model weights ☆53 · Updated last year
- Extends the original llama.cpp repo to support the RedPajama model. ☆118 · Updated last year
- Examples of apps built with Nendo, the AI Audio Tool Suite ☆55 · Updated last year
- Falcon LLM ggml framework with CPU and GPU support ☆249 · Updated 2 years ago
- GenAI & agent toolkit for Apple Silicon Macs, implementing JSON schema-steered structured output (3SO) and tool-calling in Python. For mor… ☆132 · Updated last month
- ☆54 · Updated 2 years ago
- Let's create synthetic textbooks together :) ☆76 · Updated 2 years ago
- Information on optimizing Python libraries specifically for oobabooga to take advantage of Apple Silicon and the Accelerate framework. ☆77 · Updated 11 months ago
- Grammar checker with a keyboard shortcut for Ollama and Apple MLX with Automator on macOS. ☆82 · Updated last year
- Command-line script for inferencing from models such as falcon-7b-instruct ☆75 · Updated 2 years ago
- GPT-2 small trained on phi-like data ☆68 · Updated last year
- Phi-3.5 for Mac: locally-run vision and language models for Apple Silicon ☆273 · Updated 2 months ago
- Distributed inference for MLX LLMs ☆100 · Updated last year
- A simple UI / web frontend for MLX mlx-lm using Streamlit. ☆260 · Updated 3 months ago
- A fast, minimalistic implementation of guided generation on Apple Silicon using Outlines and MLX ☆59 · Updated last year
- ☆32 · Updated 2 years ago
- For inferring and serving local LLMs using the MLX framework ☆110 · Updated last year
- Client-side toolkit for using large language models, including where self-hosted ☆115 · Updated last month
- Simple LLM inference server ☆20 · Updated last year
- Full finetuning of large language models without large memory requirements ☆94 · Updated 4 months ago
- Landmark Attention: Random-Access Infinite Context Length for Transformers (QLoRA) ☆124 · Updated 2 years ago
- CLI tool to quantize GGUF, GPTQ, AWQ, HQQ and EXL2 models ☆78 · Updated last year