ypeleg / llama
User-friendly LLaMA: Train or Run the model using PyTorch. Nothing else.
☆328 · Updated last year
Related projects
Alternatives and complementary repositories for llama
- ☆454 · Updated last year
- Repo for fine-tuning Causal LLMs ☆449 · Updated 7 months ago
- Tune any FALCON in 4-bit ☆468 · Updated last year
- Official code for ReLoRA from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates" ☆435 · Updated 6 months ago
- ☆534 · Updated 11 months ago
- LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions ☆812 · Updated last year
- Scripts for fine-tuning Llama2 via SFT and DPO. ☆182 · Updated last year
- This repository contains code for extending the Stanford Alpaca synthetic instruction tuning to existing instruction-tuned models such as… ☆348 · Updated last year
- Crosslingual Generalization through Multitask Finetuning ☆516 · Updated last month
- Reproduce results and replicate training for T0 (Multitask Prompted Training Enables Zero-Shot Task Generalization) ☆457 · Updated 2 years ago
- Code for fine-tuning Platypus-family LLMs using LoRA ☆623 · Updated 9 months ago
- Instruct-tune Open LLaMA / RedPajama / StableLM models on consumer hardware using QLoRA ☆80 · Updated 11 months ago
- Code for T-Few from "Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning" ☆431 · Updated last year
- Alpaca dataset from Stanford, cleaned and curated ☆1,519 · Updated last year
- This project is an attempt to create a common metric to test LLMs for progress in eliminating hallucinations, which is the most serious c… ☆221 · Updated last year
- A full pipeline to finetune Vicuna LLM with LoRA and RLHF on consumer hardware. Implementation of RLHF (Reinforcement Learning with Human… ☆208 · Updated 6 months ago
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. ☆528 · Updated 8 months ago
- Fast & Simple repository for pre-training and fine-tuning T5-style models ☆970 · Updated 3 months ago
- Plain PyTorch implementation of LLaMA ☆189 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers ☆415 · Updated 11 months ago
- Guide: Finetune GPT2-XL (1.5 Billion Parameters) and finetune GPT-NEO (2.7 B) on a single GPU with Huggingface Transformers using DeepSpe… ☆432 · Updated last year
- A crude RLHF layer on top of nanoGPT with Gumbel-Softmax trick ☆287 · Updated 11 months ago
- Expanding natural instructions ☆959 · Updated 11 months ago
- Finetuning Large Language Models on One Consumer GPU in 2 Bits ☆707 · Updated 5 months ago
- Code repository for supporting the paper "Atlas: Few-shot Learning with Retrieval Augmented Language Models" (https://arxiv.org/abs/2208.03… ☆517 · Updated 11 months ago
- Due to the restrictions of the LLaMA license, we try to reimplement BLOOM-LoRA (the much less restrictive BLOOM license is here: https://huggingface.co/spaces/bigs… ☆183 · Updated last year
- ☆263 · Updated last year
- Reverse Instructions to generate instruction tuning data with corpus examples ☆206 · Updated 8 months ago
- DialogStudio: Towards Richest and Most Diverse Unified Dataset Collection and Instruction-Aware Models for Conversational AI ☆478 · Updated 6 months ago
- A minimal example of aligning language models with RLHF similar to ChatGPT ☆214 · Updated last year