shawwn / llama
Inference code for LLaMA models
☆189 · Updated 2 years ago
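This fork tracks the original facebookresearch/llama inference code, which is normally launched through that repo's own torchrun scripts. As a rough, hedged illustration of what LLaMA inference looks like (not this fork's own entry point), the sketch below loads equivalent weights through the Hugging Face Transformers port; the checkpoint path is a placeholder.

```python
# Minimal LLaMA inference sketch (illustration only). This is NOT this fork's
# own entry point; it uses the Hugging Face Transformers port of the same
# weights. "path/to/converted-llama-7b" is a placeholder for a local directory
# of weights converted to the Transformers format.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

model_path = "path/to/converted-llama-7b"  # placeholder checkpoint directory
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,  # half precision to fit a 7B model on one GPU
    device_map="auto",          # requires the `accelerate` package
)

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```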
Alternatives and similar repositories for llama
Users interested in llama are comparing it to the libraries listed below.
- ☆535 · Updated last year
- LLaMa retrieval plugin script using OpenAI's retrieval plugin ☆324 · Updated 2 years ago
- Extend the original llama.cpp repo to support the RedPajama model ☆118 · Updated 11 months ago
- Landmark Attention: Random-Access Infinite Context Length for Transformers (QLoRA) ☆124 · Updated 2 years ago
- Inference code for facebook LLaMA models with Wrapyfi support ☆129 · Updated 2 years ago
- OpenAI API webserver ☆189 · Updated 3 years ago
- Falcon LLM ggml framework with CPU and GPU support ☆247 · Updated last year
- SoTA Transformers with C-backend for fast inference on your CPU ☆309 · Updated last year
- ☆460 · Updated last year
- Inference code for LLaMA models ☆46 · Updated 2 years ago
- ☆405 · Updated 2 years ago
- Nearly a thousand bash and python scripts I've written over the years ☆124 · Updated 7 months ago
- fastLLaMa: An experimental high-performance framework for running decoder-only LLMs with 4-bit quantization in Python using a C/C++ backend ☆413 · Updated 2 years ago
- Automated prompting and scoring framework to evaluate LLMs using updated human knowledge prompts ☆110 · Updated 2 years ago
- Framework-agnostic Python runtime for RWKV models ☆146 · Updated 2 years ago
- C++ implementation for BLOOM ☆809 · Updated 2 years ago
- Code for the paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot", with a LLaMA implementation ☆71 · Updated 2 years ago
- Quantized inference code for LLaMA models (see the quantization sketch after this list) ☆1,050 · Updated 2 years ago
- LLM that combines the principles of WizardLM and VicunaLM ☆717 · Updated 2 years ago
- Instruct-tune LLaMA on consumer hardware ☆362 · Updated 2 years ago
- A repository to run gpt-j-6b on low-VRAM machines (4.2 GB minimum VRAM for a 2,000-token context, 3.5 GB for a 1,000-token context). Model load… ☆114 · Updated 3 years ago
- Instruct-tuning LLaMA on consumer hardware ☆66 · Updated 2 years ago
- 💬 Chatbot web app + HTTP and WebSocket endpoints for LLM inference with the Petals client ☆314 · Updated last year
- Repository for Chat LLaMA: training a LoRA for the LLaMA (1 or 2) models on HuggingFace with 8-bit or 4-bit quantization. Research only. (See the LoRA sketch after this list.) ☆150 · Updated 2 years ago
- Drop-in replacement for OpenAI, but with open models ☆152 · Updated 2 years ago
- Fork of Facebook's LLaMA model to run on CPU ☆772 · Updated 2 years ago
- JS tokenizer for LLaMA 1 and 2 ☆357 · Updated last year
- Command-line script for inferencing from models such as MPT-7B-Chat ☆100 · Updated 2 years ago
- Simple, hackable, and fast implementation for training/finetuning medium-sized LLaMA-based models ☆177 · Updated this week
- Supercharge Open-Source AI Models ☆351 · Updated 2 years ago
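Several entries above (the quantized LLaMA inference code, fastLLaMa's 4-bit backend) rely on low-bit weight quantization. As a hedged sketch of the underlying idea only, here is plain round-to-nearest 4-bit quantization in PyTorch; real repos use more sophisticated schemes such as GPTQ.

```python
# Illustrative round-to-nearest 4-bit weight quantization. This is only a
# sketch of the storage/accuracy trade-off, not any listed repo's scheme.
import torch

def quantize_rtn_4bit(w: torch.Tensor, group_size: int = 64):
    """Symmetric round-to-nearest quantization to int4 levels, per group."""
    w = w.reshape(-1, group_size)                  # quantize in small groups
    scale = w.abs().amax(dim=1, keepdim=True) / 7  # int4 symmetric range [-7, 7]
    scale = scale.clamp(min=1e-8)                  # guard against all-zero groups
    q = torch.clamp(torch.round(w / scale), -7, 7)
    return q.to(torch.int8), scale                 # int8 storage for 4-bit codes

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return (q.float() * scale).reshape(-1)

w = torch.randn(4096)
q, s = quantize_rtn_4bit(w)
w_hat = dequantize(q, s)
print("max abs error:", (w - w_hat).abs().max().item())
```

Each group of weights shares one float scale, so storage drops roughly 4x versus fp16 at the cost of rounding error.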
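The Chat LLaMA entry above describes training a LoRA adapter on a quantized base model. A minimal sketch of that recipe using the Hugging Face PEFT and bitsandbytes libraries follows; the checkpoint path, rank, and target modules are illustrative assumptions, not taken from that repo.

```python
# Hedged sketch of LoRA fine-tuning on an 8-bit base model. Hyperparameters
# and the checkpoint path are illustrative, not the Chat LLaMA repo's own.
import torch
from transformers import LlamaForCausalLM
from peft import LoraConfig, get_peft_model

base = LlamaForCausalLM.from_pretrained(
    "path/to/llama-7b",  # placeholder checkpoint directory
    load_in_8bit=True,   # bitsandbytes int8 weights; requires a CUDA GPU
    device_map="auto",
)
# In practice one would also run PEFT's prepare-for-k-bit-training helper here.

config = LoraConfig(
    r=8,                 # adapter rank: small trainable update matrices
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # attention projections in LLaMA
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only a tiny fraction of weights train
```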