markasoftware / llama-cpu
Fork of Facebook's LLaMA model to run on CPU
☆773 · Updated 2 years ago
Alternatives and similar repositories for llama-cpu:
Users interested in llama-cpu are comparing it to the libraries listed below
- Quantized inference code for LLaMA models · ☆1,052 · Updated 2 years ago
- Simple UI for LLM Model Finetuning · ☆2,061 · Updated last year
- C++ implementation for BLOOM · ☆809 · Updated last year
- ☆405 · Updated 2 years ago
- ☆1,465 · Updated last year
- Llama 2 Everywhere (L2E) · ☆1,516 · Updated 2 months ago
- LLaMA retrieval plugin script using OpenAI's retrieval plugin · ☆324 · Updated 2 years ago
- Chat with Meta's LLaMA models at home made easy · ☆834 · Updated 2 years ago
- Run LLaMA (and Stanford-Alpaca) inference on Apple Silicon GPUs · ☆585 · Updated 2 years ago
- A school for camelids · ☆1,208 · Updated last year
- Instruct-tune LLaMA on consumer hardware · ☆362 · Updated last year
- MiniLLM is a minimal system for running modern LLMs on consumer-grade GPUs · ☆903 · Updated last year
- SoTA Transformers with C-backend for fast inference on your CPU · ☆311 · Updated last year
- Inference code for LLaMA models · ☆188 · Updated 2 years ago
- Basaran is an open-source alternative to the OpenAI text completion API. It provides a compatible streaming API for your Hugging Face Tra… · ☆1,298 · Updated last year
- ☆458 · Updated last year
- The complete training code of the open-source high-performance Llama model, including the full process from pre-training to RLHF · ☆44 · Updated last year
- C++ implementation for 💫StarCoder · ☆453 · Updated last year
- Alpaca dataset from Stanford, cleaned and curated · ☆1,545 · Updated last year
- Finetune llama2-70b and codellama on MacBook Air without quantization · ☆448 · Updated last year
- 4-bit quantization of LLaMA using GPTQ · ☆3,046 · Updated 8 months ago
- Officially supported Python bindings for llama.cpp + gpt4all · ☆1,020 · Updated last year
- fastLLaMa: An experimental high-performance framework for running decoder-only LLMs with 4-bit quantization in Python using a C/C++ backe… · ☆408 · Updated last year
- [NeurIPS 22] [AAAI 24] Recurrent Transformer-based long-context architecture · ☆760 · Updated 5 months ago
- ☆535 · Updated last year
- A collection of modular datasets generated by GPT-4: General-Instruct, Roleplay-Instruct, Code-Instruct, and Toolformer · ☆1,628 · Updated last year
- Large language models (LLMs) made easy; EasyLM is a one-stop solution for pre-training, finetuning, evaluating, and serving LLMs in JAX/Fl… · ☆2,464 · Updated 7 months ago
- LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions · ☆820 · Updated last year
- Structured and type-hinted GPT responses in Python · ☆735 · Updated 8 months ago
- High-speed download of LLaMA, Facebook's 65B-parameter GPT model · ☆4,162 · Updated last year