ypeleg / llama
User-friendly LLaMA: Train or Run the model using PyTorch. Nothing else.
☆339 · Updated 2 years ago
Alternatives and similar repositories for llama
Users interested in llama are comparing it to the repositories listed below.
- ☆458 · Updated last year
- Tune any FALCON in 4-bit · ☆467 · Updated last year
- LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions · ☆819 · Updated 2 years ago
- Repo for fine-tuning Causal LLMs · ☆456 · Updated last year
- ☆535 · Updated last year
- This repository contains code for extending the Stanford Alpaca synthetic instruction tuning to existing instruction-tuned models such as… · ☆352 · Updated last year
- ☆406 · Updated 2 years ago
- Official code for ReLoRA from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates" · ☆456 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers · ☆423 · Updated last year
- A crude RLHF layer on top of nanoGPT with Gumbel-Softmax trick · ☆290 · Updated last year
- Finetuning Large Language Models on One Consumer GPU in 2 Bits · ☆724 · Updated last year
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization · ☆692 · Updated 10 months ago
- Fast & Simple repository for pre-training and fine-tuning T5-style models · ☆1,005 · Updated 10 months ago
- Code for fine-tuning Platypus family LLMs using LoRA · ☆629 · Updated last year
- Quantized inference code for LLaMA models · ☆1,051 · Updated 2 years ago
- Chat with Meta's LLaMA models at home, made easy · ☆836 · Updated 2 years ago
- Mass-editing thousands of facts into a transformer memory (ICLR 2023) · ☆500 · Updated last year
- An open collection of implementation tips, tricks and resources for training large language models · ☆475 · Updated 2 years ago
- Alpaca dataset from Stanford, cleaned and curated · ☆1,556 · Updated 2 years ago
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. · ☆546 · Updated last year
- Plain PyTorch implementation of LLaMA · ☆188 · Updated 2 years ago
- Instruct-tune Open LLaMA / RedPajama / StableLM models on consumer hardware using QLoRA · ☆81 · Updated last year
- Reproduce results and replicate training of T0 (Multitask Prompted Training Enables Zero-Shot Task Generalization) · ☆464 · Updated 2 years ago
- Open-source pre-training implementation of Google's LaMDA in PyTorch. Adding RLHF similar to ChatGPT. · ☆471 · Updated last year
- Fast Inference Solutions for BLOOM · ☆564 · Updated 8 months ago
- fastLLaMa: An experimental high-performance framework for running Decoder-only LLMs with 4-bit quantization in Python using a C/C++ backe… · ☆409 · Updated 2 years ago
- A full pipeline to finetune Vicuna LLM with LoRA and RLHF on consumer hardware. Implementation of RLHF (Reinforcement Learning with Human… · ☆216 · Updated last year
- A collection of open-source datasets to train instruction-following LLMs (ChatGPT, LLaMA, Alpaca) · ☆1,120 · Updated last year
- Code for T-Few from "Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning" · ☆452 · Updated last year
- Expanding natural instructions · ☆1,006 · Updated last year