ypeleg / llama
User-friendly LLaMA: Train or Run the model using PyTorch. Nothing else.
☆339 · Updated last year
Alternatives and similar repositories for llama:
Users interested in llama are comparing it to the repositories listed below:
- ☆456 · Updated last year
- Repo for fine-tuning Causal LLMs ☆455 · Updated 10 months ago
- ☆536 · Updated last year
- This repository contains code for extending the Stanford Alpaca synthetic instruction tuning to existing instruction-tuned models such as… ☆350 · Updated last year
- Tune any FALCON in 4-bit ☆466 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers ☆421 · Updated last year
- All available datasets for Instruction Tuning of Large Language Models ☆242 · Updated last year
- Public repo for the NeurIPS 2023 paper "Unlimiformer: Long-Range Transformers with Unlimited Length Input" ☆1,060 · Updated 11 months ago
- Code for fine-tuning the Platypus family of LLMs using LoRA ☆626 · Updated last year
- Official repository for LongChat and LongEval ☆519 · Updated 8 months ago
- LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions ☆818 · Updated last year
- Fast Inference Solutions for BLOOM ☆563 · Updated 4 months ago
- Quantized inference code for LLaMA models ☆1,052 · Updated last year
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. ☆541 · Updated 11 months ago
- Code repository supporting the paper "Atlas: Few-shot Learning with Retrieval Augmented Language Models" (https://arxiv.org/abs/2208.03… ☆526 · Updated last year
- ☆1,452 · Updated last year
- Due to the license restrictions on LLaMA, we try to reimplement BLOOM-LoRA (using the much less restrictive BLOOM license; see https://huggingface.co/spaces/bigs… ☆185 · Updated last year
- Salesforce open-source LLMs with 8k sequence length. ☆717 · Updated 2 weeks ago
- Crosslingual Generalization through Multitask Finetuning ☆525 · Updated 4 months ago
- Reverse Instructions to generate instruction tuning data with corpus examples ☆208 · Updated 11 months ago
- Official implementation of our NeurIPS 2023 paper "Augmenting Language Models with Long-Term Memory". ☆781 · Updated 10 months ago
- Alpaca dataset from Stanford, cleaned and curated ☆1,537 · Updated last year
- Distributed trainer for LLMs ☆557 · Updated 9 months ago
- Open Instruction Generalist is an assistant trained on massive synthetic instructions to perform many millions of tasks ☆208 · Updated last year
- Mass-editing thousands of facts into a transformer memory (ICLR 2023) ☆462 · Updated last year
- A crude RLHF layer on top of nanoGPT with Gumbel-Softmax trick ☆288 · Updated last year
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆687 · Updated 10 months ago
- Chat with Meta's LLaMA models at home, made easy ☆834 · Updated last year
- Finetuning Large Language Models on One Consumer GPU in 2 Bits ☆717 · Updated 8 months ago
- This project is an attempt to create a common metric to test LLMs for progress in eliminating hallucinations, which is the most serious c… ☆222 · Updated last year