randaller / llama-cpu
CPU inference code for LLaMA models
☆137 Updated 2 years ago
Alternatives and similar repositories for llama-cpu
Users interested in llama-cpu are comparing it to the libraries listed below.
- Inference code for Facebook LLaMA models with Wrapyfi support ☆129 Updated 2 years ago
- fastLLaMa: An experimental high-performance framework for running Decoder-only LLMs with 4-bit quantization in Python using a C/C++ backe… ☆411 Updated 2 years ago
- Python bindings for llama.cpp ☆198 Updated 2 years ago
- Falcon LLM ggml framework with CPU and GPU support ☆247 Updated last year
- Automated prompting and scoring framework to evaluate LLMs using updated human knowledge prompts ☆108 Updated 2 years ago
- Embeddings-focused small version of the Llama NLP model ☆106 Updated 2 years ago
- 💬 Chatbot web app + HTTP and Websocket endpoints for LLM inference with the Petals client ☆316 Updated last year
- LLaMA Server combines the power of LLaMA C++ with the beauty of Chatbot UI. ☆129 Updated 2 years ago
- 4-bit quantization of SantaCoder using GPTQ ☆51 Updated 2 years ago
- Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA ☆123 Updated 2 years ago
- C++ implementation for 💫StarCoder ☆457 Updated 2 years ago
- Instruct-tuning LLaMA on consumer hardware ☆65 Updated 2 years ago
- Visual Studio Code extension for WizardCoder ☆148 Updated 2 years ago
- A collection of prompts for Llama ☆100 Updated 2 years ago
- Host a GPTQ model using AutoGPTQ as an API compatible with the text generation UI API. ☆90 Updated 2 years ago
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… ☆146 Updated 2 years ago
- Some simple scripts that I use day-to-day when working with LLMs and Huggingface Hub ☆160 Updated 2 years ago
- SoTA Transformers with C-backend for fast inference on your CPU. ☆308 Updated last year
- Merge Transformers language models using gradient parameters. ☆208 Updated last year
- Inference code for LLaMA models ☆42 Updated 2 years ago
- ☆534 Updated last year
- Framework-agnostic Python runtime for RWKV models ☆146 Updated 2 years ago
- Repository for Chat LLaMA - training a LoRA for the LLaMA (1 or 2) models on HuggingFace with 8-bit or 4-bit quantization. Research only. ☆149 Updated 2 years ago
- Simple, hackable and fast implementation for training/finetuning medium-sized LLaMA-based models ☆182 Updated 2 months ago
- Train LLaMA with LoRA on a single 4090 and merge the LoRA weights to work like Stanford Alpaca. ☆52 Updated 2 years ago
- A torch-less C++ RWKV implementation using 8-bit quantization, written in CUDA/HIP/Vulkan for maximum compatibility and minimum dependenci… ☆313 Updated last year
- Deploy your GGML models to HuggingFace Spaces with Docker and Gradio ☆38 Updated 2 years ago
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ☆63 Updated 2 years ago
- minichatgpt - To Train ChatGPT In 5 Minutes ☆169 Updated 2 years ago
- Code for the paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot" with LLaMA implementation. ☆71 Updated 2 years ago