Maknee / minigpt4.cpp
Port of MiniGPT4 in C++ (4bit, 5bit, 6bit, 8bit, 16bit CPU inference with GGML)
☆568 · Updated 2 years ago
Alternatives and similar repositories for minigpt4.cpp
Users interested in minigpt4.cpp are comparing it to the repositories listed below.
- Tiny Dream - An embedded, Header Only, Stable Diffusion C++ implementation ☆266 · Updated 2 years ago
- CLIP inference in plain C/C++ with no extra dependencies ☆549 · Updated 7 months ago
- SoTA Transformers with C-backend for fast inference on your CPU. ☆311 · Updated 2 years ago
- C++ implementation for BLOOM ☆809 · Updated 2 years ago
- throwaway GPT inference ☆141 · Updated last year
- Llama 2 Everywhere (L2E) ☆1,526 · Updated 5 months ago
- ☆1,282 · Updated 2 years ago
- Wang Yi's GPT solution ☆142 · Updated 2 years ago
- ggml implementation of BERT ☆498 · Updated last year
- This repository contains a pure C++ ONNX implementation of multiple offline AI models, such as StableDiffusion (1.5 and XL), ControlNet, … ☆629 · Updated 8 months ago
- C++ implementation for 💫StarCoder ☆459 · Updated 2 years ago
- A torchless, c++ rwkv implementation using 8bit quantization, written in cuda/hip/vulkan for maximum compatibility and minimum dependenci… ☆313 · Updated 2 years ago
- Falcon LLM ggml framework with CPU and GPU support ☆249 · Updated 2 years ago
- LLaVA server (llama.cpp). ☆183 · Updated 2 years ago
- LLM-based code completion engine ☆190 · Updated last year
- Python bindings for llama.cpp ☆198 · Updated 2 years ago
- GGUF implementation in C as a library and a tools CLI program ☆301 · Updated 5 months ago
- Finetune llama2-70b and codellama on MacBook Air without quantization ☆450 · Updated last year
- WebGPU LLM inference tuned by hand ☆151 · Updated 2 years ago
- A simple "Be My Eyes" web app with a llama.cpp/llava backend ☆492 · Updated 2 years ago
- fastLLaMa: An experimental high-performance framework for running Decoder-only LLMs with 4-bit quantization in Python using a C/C++ backe… ☆412 · Updated 2 years ago
- Suno AI's Bark model in C/C++ for fast text-to-speech generation ☆854 · Updated last year
- An implementation of bucketMul LLM inference ☆224 · Updated last year
- Instruct-tune LLaMA on consumer hardware ☆362 · Updated 2 years ago
- A BERT that you can train on a (gaming) laptop. ☆209 · Updated 2 years ago
- Inference of Mamba and Mamba2 models in pure C ☆196 · Updated 2 weeks ago
- Fork of Facebook's LLaMA model to run on CPU ☆771 · Updated 2 years ago
- ☆255 · Updated 2 years ago
- a small code base for training large models ☆322 · Updated 9 months ago
- ☆1,029 · Updated 2 years ago