shawwn / llama
Inference code for LLaMA models
☆188 · Updated last year
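
For context on what "inference code for LLaMA models" typically looks like, here is a rough, hedged sketch. It uses the Hugging Face `transformers` library and the community `huggyllama/llama-7b` checkpoint as stand-ins for illustration; it is not this repository's own API, entry point, or weights.

```python
# Hedged illustration only: Hugging Face transformers + a community LLaMA
# checkpoint as stand-ins, NOT this repository's API or checkpoints.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "huggyllama/llama-7b"  # assumption: any LLaMA-architecture checkpoint works here
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(
        **inputs, max_new_tokens=32, do_sample=True, top_p=0.95, temperature=0.8
    )
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```
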
Alternatives and similar repositories for llama:
Users interested in llama are comparing it to the libraries listed below.
- ☆407 · Updated last year
- Falcon LLM ggml framework with CPU and GPU support ☆246 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers, with QLoRA ☆123 · Updated last year
- ☆536 · Updated last year
- Extends the original llama.cpp repo to support the RedPajama model ☆117 · Updated 5 months ago
- LLaMA retrieval plugin script using OpenAI's retrieval plugin ☆324 · Updated last year
- Inference code for Facebook's LLaMA models with Wrapyfi support ☆130 · Updated last year
- Fork of Facebook's LLaMA model to run on CPU ☆772 · Updated last year
- Quantized inference code for LLaMA models ☆1,052 · Updated last year
- Inference code for LLaMA models ☆46 · Updated last year
- ☆456 · Updated last year
- SoTA Transformers with a C backend for fast inference on your CPU ☆311 · Updated last year
- Automated prompting and scoring framework to evaluate LLMs using updated human knowledge prompts ☆111 · Updated last year
- Inference code for LLaMA models ☆35 · Updated last year
- Code for the paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot", with a LLaMA implementation ☆71 · Updated last year
- A collection of prompts for Llama ☆97 · Updated last year
- Framework-agnostic Python runtime for RWKV models ☆145 · Updated last year
- OpenAI API webserver ☆183 · Updated 3 years ago
- C++ implementation for BLOOM ☆810 · Updated last year
- 4-bit quantization of SantaCoder using GPTQ ☆51 · Updated last year
- Chat with Meta's LLaMA models at home, made easy ☆834 · Updated last year
- Just a simple HowTo for https://github.com/johnsmith0031/alpaca_lora_4bit ☆31 · Updated last year
- LLM that combines the principles of WizardLM and VicunaLM ☆713 · Updated last year
- C++ implementation for 💫StarCoder ☆450 · Updated last year
- JS tokenizer for LLaMA 1 and 2 ☆349 · Updated 7 months ago
- fastLLaMa: An experimental high-performance framework for running decoder-only LLMs with 4-bit quantization in Python, using a C/C++ backend ☆408 · Updated last year
- Command-line script for running inference with models such as MPT-7B-Chat ☆101 · Updated last year
- rwkv_chatbot ☆62 · Updated 2 years ago
- Run Alpaca LLM in LangChain ☆218 · Updated last year
- LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions ☆818 · Updated last year