cedrickchee / llama
Inference code for LLaMA 2 models
☆30 · Updated last year
Alternatives and similar repositories for llama
Users interested in llama are comparing it to the repositories listed below.
- Zeus LLM Trainer is a rewrite of Stanford Alpaca, aiming to be the trainer for all large language models ☆70 · Updated 2 years ago
- Tune MPTs ☆84 · Updated 2 years ago
- Inference script for Meta's LLaMA models using a Hugging Face wrapper ☆110 · Updated 2 years ago
- Demonstration that finetuning a RoPE model on sequences longer than those seen in pre-training extends the model's context limit ☆63 · Updated 2 years ago
- The GeoV model is a large language model designed by Georges Harik that uses Rotary Positional Embeddings with Relative distances (RoPER)… ☆121 · Updated 2 years ago
- Framework-agnostic Python runtime for RWKV models ☆147 · Updated 2 years ago
- Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA ☆104 · Updated 6 months ago
- Full finetuning of large language models without large memory requirements ☆94 · Updated 2 months ago
- QLoRA: Efficient Finetuning of Quantized LLMs ☆79 · Updated last year
- ☆26 · Updated 2 years ago
- This project aims to make RWKV accessible to everyone through a Hugging Face-like interface, while keeping it close to the R and D RWKV bra… ☆65 · Updated 2 years ago
- Extends the original llama.cpp repo to support the RedPajama model. ☆118 · Updated last year
- The data processing pipeline for the Koala chatbot language model ☆118 · Updated 2 years ago
- Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or any other RLHF techniques, while always keeping a data-first app… ☆170 · Updated last year
- Inference code for Facebook's LLaMA models with Wrapyfi support ☆129 · Updated 2 years ago
- Instruct-tuning LLaMA on consumer hardware ☆66 · Updated 2 years ago
- ☆78 · Updated last year
- Fast approximate inference on a single GPU with sparsity-aware offloading ☆39 · Updated last year
- Command-line script for inference with models such as MPT-7B-Chat ☆100 · Updated 2 years ago
- Inference code for LLaMA models with a Gradio interface and rolling generation like ChatGPT ☆48 · Updated 2 years ago
- Code and documentation to train Stanford's Alpaca models, and generate the data. ☆110 · Updated 2 years ago
- RWKV (Receptance Weighted Key Value) is an RNN with Transformer-level performance ☆41 · Updated 2 years ago
- Weekly visualization report of open LLM model performance based on 4 metrics ☆86 · Updated 2 years ago
- ☆457 · Updated 2 years ago
- ☆74 · Updated 2 years ago
- Modified Stanford-Alpaca trainer for training Replit's code model ☆41 · Updated 2 years ago
- ☆81 · Updated last year
- Experiments on speculative sampling with Llama models ☆127 · Updated 2 years ago
- Merge LLMs that are split into parts ☆27 · Updated 4 months ago
- 4-bit quantization of SantaCoder using GPTQ ☆51 · Updated 2 years ago