randaller / llama-cpu
Inference code for LLaMA models on CPU
☆136 · Updated 2 years ago
Alternatives and similar repositories for llama-cpu
Users interested in llama-cpu are comparing it to the libraries listed below.
- Inference code for facebook LLaMA models with Wrapyfi support ☆129 · Updated 2 years ago
- Python bindings for llama.cpp ☆197 · Updated 2 years ago
- Inference code for LLaMA models ☆42 · Updated 2 years ago
- Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA ☆122 · Updated last year
- Inference code for LLaMA models ☆35 · Updated 2 years ago
- 4-bit quantization of SantaCoder using GPTQ ☆50 · Updated 2 years ago
- Falcon LLM ggml framework with CPU and GPU support ☆245 · Updated last year
- LLaMA Server combines the power of LLaMA C++ with the beauty of Chatbot UI. ☆123 · Updated last year
- ☆457 · Updated last year
- Instruct-tuning LLaMA on consumer hardware ☆65 · Updated 2 years ago
- Chat with Meta's LLaMA models at home made easy ☆836 · Updated 2 years ago
- Automated prompting and scoring framework to evaluate LLMs using updated human knowledge prompts ☆109 · Updated last year
- Simple, hackable and fast implementation for training/finetuning medium-sized LLaMA-based models ☆171 · Updated last month
- Inference code for LLaMA models ☆46 · Updated 2 years ago
- fastLLaMa: An experimental high-performance framework for running Decoder-only LLMs with 4-bit quantization in Python using a C/C++ backend ☆408 · Updated 2 years ago
- SoTA Transformers with C-backend for fast inference on your CPU. ☆309 · Updated last year
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ☆63 · Updated last year
- C++ implementation for BLOOM ☆809 · Updated 2 years ago
- Framework agnostic python runtime for RWKV models ☆146 · Updated last year
- Deploy your GGML models to HuggingFace Spaces with Docker and gradio ☆36 · Updated last year
- ☆534 · Updated last year
- Merge Transformers language models by use of gradient parameters. ☆207 · Updated 9 months ago
- 4-bit quantization of LLaMA using GPTQ ☆129 · Updated 2 years ago
- 4-bit quantization of models using GPTQ ☆18 · Updated 2 years ago
- GPTQ inference Triton kernel ☆300 · Updated 2 years ago
- Embeddings focused small version of Llama NLP model ☆104 · Updated 2 years ago
- Just a simple HowTo for https://github.com/johnsmith0031/alpaca_lora_4bit ☆30 · Updated 2 years ago
- Host the GPTQ model using AutoGPTQ as an API that is compatible with text generation UI API. ☆90 · Updated last year
- A dataset featuring diverse dialogues between two ChatGPT (gpt-3.5-turbo) instances with system messages written by GPT-4. Covering various… ☆166 · Updated 2 years ago
- Code for the paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot" with LLaMA implementation. ☆70 · Updated 2 years ago