nomic-ai / pygpt4all
Officially supported Python bindings for llama.cpp + gpt4all
☆1,016 · May 12, 2023 · Updated 2 years ago
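For context, pygpt4all wraps ggml-format GPT4All/LLaMA checkpoints behind a small streaming Python API. Below is a minimal sketch based on the project's (now deprecated) README; the `GPT4All` class, the iterator-style `generate()` call, and the model path are assumptions taken from that README rather than a verified current API.

```python
# Minimal usage sketch (assumptions: pygpt4all's GPT4All class, a streaming
# generate() iterator, and a local ggml model file that you supply yourself).
from pygpt4all import GPT4All

# Placeholder path: point this at a ggml-format GPT4All model on disk.
model = GPT4All("./models/ggml-gpt4all-l13b-snoozy.bin")

# Tokens are yielded as the llama.cpp backend produces them.
for token in model.generate("Once upon a time, "):
    print(token, end="", flush=True)
```

Note that the upstream project now recommends the official gpt4all Python bindings instead; pygpt4all itself is archived.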
Alternatives and similar repositories for pygpt4all
Users interested in pygpt4all are comparing it to the libraries listed below.
- gpt4all-j chat ☆1,271 · May 10, 2023 · Updated 2 years ago
- Lord of Large Language and Multi modal Systems Web User Interface ☆4,772 · Updated this week
- GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use. ☆77,136 · May 27, 2025 · Updated 8 months ago
- 4 bits quantization of LLaMA using GPTQ ☆3,074 · Jul 13, 2024 · Updated last year
- Instruct-tune LLaMA on consumer hardware ☆18,971 · Jul 29, 2024 · Updated last year
- LLM as a Chatbot Service ☆3,332 · Nov 20, 2023 · Updated 2 years ago
- Python bindings for llama.cpp ☆9,971 · Aug 15, 2025 · Updated 6 months ago
- Let ChatGPT teach your own chatbot in hours with a single GPU! ☆3,167 · Mar 17, 2024 · Updated last year
- An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena. ☆39,402 · Jun 2, 2025 · Updated 8 months ago
- Locally run an Assistant-Tuned Chat-Style LLM ☆496 · Apr 12, 2023 · Updated 2 years ago
- Locally run an Instruction-Tuned Chat-Style LLM ☆10,186 · Apr 19, 2023 · Updated 2 years ago
- OpenLLaMA, a permissively licensed open source reproduction of Meta AI’s LLaMA 7B trained on the RedPajama dataset ☆7,531 · Jul 16, 2023 · Updated 2 years ago
- LLMs built upon Evol-Instruct: WizardLM, WizardCoder, WizardMath ☆9,477 · Jun 7, 2025 · Updated 8 months ago
- Nomic Developer API SDK ☆1,868 · Nov 11, 2025 · Updated 3 months ago
- Large language models (LLMs) made easy, EasyLM is a one stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Fl… ☆2,514 · Aug 13, 2024 · Updated last year
- StableLM: Stability AI Language Models ☆15,766 · Apr 8, 2024 · Updated last year
- Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4bit quantization, LoRA and LLaMA-Ad… ☆6,087 · Jul 1, 2025 · Updated 7 months ago
- The definitive Web UI for local AI, with powerful features and easy setup. ☆46,037 · Feb 3, 2026 · Updated last week
- OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamical… ☆37,452 · Aug 17, 2024 · Updated last year
- ☆22,126 · Jan 31, 2026 · Updated 2 weeks ago
- LLM inference in C/C++ ☆94,823 · Updated this week
- Python bindings for the Transformer models implemented in C/C++ using GGML library. ☆1,879 · Jan 28, 2024 · Updated 2 years ago
- A collection of modular datasets generated by GPT-4, General-Instruct - Roleplay-Instruct - Code-Instruct - and Toolformer ☆1,630 · Sep 15, 2023 · Updated 2 years ago
- Code and documentation to train Stanford's Alpaca models, and generate the data. ☆30,266 · Jul 17, 2024 · Updated last year
- ☆6,238 · Feb 9, 2026 · Updated last week
- fastLLaMa: An experimental high-performance framework for running Decoder-only LLMs with 4-bit quantization in Python using a C/C++ backe… ☆412 · Jun 2, 2023 · Updated 2 years ago
- [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters ☆5,936 · Mar 14, 2024 · Updated last year
- Instruction Tuning with GPT-4 ☆4,340 · Jun 11, 2023 · Updated 2 years ago
- ☆215 · Apr 13, 2023 · Updated 2 years ago
- A library of data loaders for LLMs made by the community -- to be used with LlamaIndex and/or LangChain ☆3,479 · Mar 1, 2024 · Updated last year
- Python bindings for llama.cpp ☆198 · Apr 22, 2023 · Updated 2 years ago
- Python bindings for llama.cpp ☆68 · Feb 29, 2024 · Updated last year
- The simplest way to run LLaMA on your local machine ☆12,989 · Jun 18, 2024 · Updated last year
- INT4/INT5/INT8 and FP16 inference on CPU for RWKV language model ☆1,562 · Mar 23, 2025 · Updated 10 months ago
- ☆132 · Apr 23, 2023 · Updated 2 years ago
- LlamaIndex is the leading framework for building LLM-powered agents over your data. ☆46,977 · Updated this week
- Alpaca dataset from Stanford, cleaned and curated ☆1,580 · Apr 14, 2023 · Updated 2 years ago
- Universal LLM Deployment Engine with ML Compilation ☆22,039 · Updated this week
- 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading ☆9,930 · Sep 7, 2024 · Updated last year