markasoftware / llama-cpu
Fork of Facebook's LLaMA model to run on CPU
☆773 · Updated 2 years ago
Alternatives and similar repositories for llama-cpu:
Users interested in llama-cpu are comparing it to the repositories listed below
- Quantized inference code for LLaMA models ☆1,048 · Updated 2 years ago
- C++ implementation for BLOOM ☆810 · Updated last year
- Simple UI for LLM Model Finetuning ☆2,061 · Updated last year
- Chat with Meta's LLaMA models at home made easy ☆833 · Updated 2 years ago
- Basaran is an open-source alternative to the OpenAI text completion API. It provides a compatible streaming API for your Hugging Face Tra… ☆1,298 · Updated last year
- ☆1,468 · Updated last year
- Inference code and configs for the ReplitLM model family ☆971 · Updated last year
- Finetune llama2-70b and codellama on MacBook Air without quantization ☆448 · Updated last year
- Llama 2 Everywhere (L2E) ☆1,516 · Updated 3 months ago
- Officially supported Python bindings for llama.cpp + gpt4all ☆1,020 · Updated last year
- ☆405 · Updated 2 years ago
- A school for camelids ☆1,209 · Updated last year
- Instruct-tune LLaMA on consumer hardware ☆362 · Updated 2 years ago
- 4-bit quantization of LLaMA using GPTQ ☆3,052 · Updated 9 months ago
- The complete training code of the open-source high-performance Llama model, including the full process from pre-training to RLHF. ☆44 · Updated last year
- LLaMA retrieval plugin script using OpenAI's retrieval plugin ☆324 · Updated 2 years ago
- MiniLLM is a minimal system for running modern LLMs on consumer-grade GPUs ☆904 · Updated last year
- Inference code for LLaMA models ☆188 · Updated 2 years ago
- Run LLaMA (and Stanford Alpaca) inference on Apple Silicon GPUs ☆585 · Updated 2 years ago
- INT4/INT5/INT8 and FP16 inference on CPU for the RWKV language model ☆1,509 · Updated last month
- A FastAPI service for semantic text search using precomputed embeddings and advanced similarity measures, with built-in support for vario… ☆1,015 · Updated last month
- LLaMA: Open and Efficient Foundation Language Models ☆2,802 · Updated last year
- Explore large language models in 512MB of RAM ☆1,188 · Updated last month
- ☆1,025 · Updated last year
- A llama.cpp drop-in replacement for OpenAI's GPT endpoints, allowing GPT-powered apps to run off local llama.cpp models instead of OpenAI… ☆598 · Updated last year
- fastLLaMa: An experimental high-performance framework for running decoder-only LLMs with 4-bit quantization in Python using a C/C++ backe… ☆410 · Updated last year
- Port of MiniGPT-4 in C++ (4-bit, 5-bit, 6-bit, 8-bit, 16-bit CPU inference with GGML) ☆567 · Updated last year
- C++ implementation for 💫StarCoder ☆453 · Updated last year
- Large language model evaluation and workflow framework from Phase AI ☆457 · Updated 3 months ago
- The RedPajama-Data repository contains code for preparing large datasets for training large language models. ☆4,708 · Updated 4 months ago