markasoftware / llama-cpu
Fork of Facebook's LLaMA model to run on CPU
☆772 · Updated last year
Alternatives and similar repositories for llama-cpu:
Users interested in llama-cpu are comparing it to the repositories listed below.
- Quantized inference code for LLaMA models · ☆1,051 · Updated last year
- Simple UI for LLM Model Finetuning · ☆2,048 · Updated last year
- C++ implementation for BLOOM · ☆810 · Updated last year
- Instruct-tune LLaMA on consumer hardware · ☆362 · Updated last year
- ☆1,447 · Updated last year
- Basaran is an open-source alternative to the OpenAI text completion API. It provides a compatible streaming API for your Hugging Face Tra… · ☆1,292 · Updated last year
- Official supported Python bindings for llama.cpp + gpt4all · ☆1,020 · Updated last year
- A school for camelids · ☆1,211 · Updated last year
- The complete training code of the open-source high-performance Llama model, including the full process from pre-training to RLHF. · ☆36 · Updated last year
- 4 bits quantization of LLaMA using GPTQ · ☆3,032 · Updated 6 months ago
- SoTA Transformers with C-backend for fast inference on your CPU. · ☆312 · Updated last year
- INT4/INT5/INT8 and FP16 inference on CPU for RWKV language model · ☆1,454 · Updated last week
- ☆406 · Updated last year
- A collection of modular datasets generated by GPT-4: General-Instruct, Roleplay-Instruct, Code-Instruct, and Toolformer · ☆1,624 · Updated last year
- ☆250 · Updated last year
- Chat with Meta's LLaMA models at home made easy · ☆834 · Updated last year
- Port of MiniGPT4 in C++ (4bit, 5bit, 6bit, 8bit, 16bit CPU inference with GGML) · ☆562 · Updated last year
- Large language models (LLMs) made easy; EasyLM is a one-stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Fl… · ☆2,445 · Updated 5 months ago
- High-speed download of LLaMA, Facebook's 65B parameter GPT model · ☆4,166 · Updated last year
- Finetune llama2-70b and codellama on MacBook Air without quantization · ☆447 · Updated 10 months ago
- LLaMa retrieval plugin script using OpenAI's retrieval plugin · ☆324 · Updated last year
- [NeurIPS 22] [AAAI 24] Recurrent Transformer-based long-context architecture. · ☆760 · Updated 3 months ago
- Inference code for LLaMA models · ☆188 · Updated last year
- OpenAI-compatible Python client that can call any LLM · ☆369 · Updated last year
- Llama 2 Everywhere (L2E) · ☆1,507 · Updated 2 weeks ago
- Structured and typehinted GPT responses in Python · ☆735 · Updated 6 months ago
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. · ☆2,806 · Updated last year
- Alpaca dataset from Stanford, cleaned and curated · ☆1,532 · Updated last year
- C++ implementation for 💫StarCoder · ☆450 · Updated last year
- A voice chat app · ☆1,084 · Updated 2 months ago