galatolofederico / vanilla-llama
Plain PyTorch implementation of LLaMA
☆188 · Updated 2 years ago
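For readers comparing these repositories, here is a minimal, illustrative sketch of the building blocks a plain PyTorch LLaMA implementation is made of: RMSNorm, rotary position embeddings, causal self-attention, and a SwiGLU feed-forward. This is not vanilla-llama's actual code; all class names, dimensions, and hyperparameters below are assumptions for illustration.

```python
# Illustrative sketch only (not vanilla-llama's code): the core pieces of a
# LLaMA-style decoder layer in plain PyTorch. Sizes are arbitrary examples.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RMSNorm(nn.Module):
    def __init__(self, dim, eps=1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(dim))
        self.eps = eps

    def forward(self, x):
        # Normalize by the root-mean-square of the features (no mean-centering).
        return self.weight * x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + self.eps)

def rotary(x, base=10000.0):
    # Apply rotary position embeddings to a (batch, heads, seq, head_dim) tensor.
    b, h, t, d = x.shape
    pos = torch.arange(t, device=x.device, dtype=torch.float32)
    freqs = base ** (-torch.arange(0, d, 2, device=x.device, dtype=torch.float32) / d)
    ang = pos[:, None] * freqs[None, :]          # (seq, head_dim/2)
    cos, sin = ang.cos(), ang.sin()
    x1, x2 = x[..., 0::2], x[..., 1::2]          # rotate pairs of channels
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

class DecoderLayer(nn.Module):
    def __init__(self, dim=512, n_heads=8):
        super().__init__()
        self.n_heads, self.hd = n_heads, dim // n_heads
        self.attn_norm, self.ffn_norm = RMSNorm(dim), RMSNorm(dim)
        self.wq, self.wk, self.wv, self.wo = (nn.Linear(dim, dim, bias=False) for _ in range(4))
        hidden = 4 * dim * 2 // 3  # roughly LLaMA's SwiGLU sizing (real code rounds this up)
        self.w1, self.w3 = nn.Linear(dim, hidden, bias=False), nn.Linear(dim, hidden, bias=False)
        self.w2 = nn.Linear(hidden, dim, bias=False)

    def forward(self, x):
        b, t, d = x.shape
        h = self.attn_norm(x)  # pre-norm, as in LLaMA
        q, k, v = (w(h).view(b, t, self.n_heads, self.hd).transpose(1, 2)
                   for w in (self.wq, self.wk, self.wv))
        q, k = rotary(q), rotary(k)
        a = F.scaled_dot_product_attention(q, k, v, is_causal=True)  # causal self-attention
        x = x + self.wo(a.transpose(1, 2).reshape(b, t, d))          # residual
        h = self.ffn_norm(x)
        return x + self.w2(F.silu(self.w1(h)) * self.w3(h))         # SwiGLU feed-forward

x = torch.randn(1, 16, 512)
print(DecoderLayer()(x).shape)  # torch.Size([1, 16, 512])
```

A full model stacks such layers under a token embedding and adds a final norm, an output projection, and a KV cache for generation; the sketch omits those for brevity.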
Alternatives and similar repositories for vanilla-llama
Users interested in vanilla-llama are comparing it to the libraries listed below.
- ☆458 · Updated last year
- Fast Inference Solutions for BLOOM ☆564 · Updated 9 months ago
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ☆621 · Updated last year
- Official repository for LongChat and LongEval ☆523 · Updated last year
- LOMO: LOw-Memory Optimization ☆989 · Updated last year
- Official code for ReLoRA from the paper Stack More Layers Differently: High-Rank Training Through Low-Rank Updates ☆458 · Updated last year
- A full pipeline to finetune Vicuna LLM with LoRA and RLHF on consumer hardware. Implementation of RLHF (Reinforcement Learning with Human… ☆218 · Updated last year
- User-friendly LLaMA: Train or Run the model using PyTorch. Nothing else. ☆338 · Updated 2 years ago
- ☆543 · Updated 7 months ago
- Finetuning Large Language Models on One Consumer GPU in 2 Bits ☆726 · Updated last year
- Crosslingual Generalization through Multitask Finetuning ☆538 · Updated 9 months ago
- [NIPS2023] RRHF & Wombat ☆809 · Updated last year
- ☆535 · Updated last year
- This repository contains code for extending the Stanford Alpaca synthetic instruction tuning to existing instruction-tuned models such as… ☆352 · Updated 2 years ago
- Open-source pre-training implementation of Google's LaMDA in PyTorch. Adding RLHF similar to ChatGPT. ☆471 · Updated last year
- GPTQ inference Triton kernel ☆302 · Updated 2 years ago
- Automatically split your PyTorch models on multiple GPUs for training & inference ☆656 · Updated last year
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization ☆695 · Updated 11 months ago
- Landmark Attention: Random-Access Infinite Context Length for Transformers ☆423 · Updated last year
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,402 · Updated last year
- Rectified Rotary Position Embeddings ☆374 · Updated last year
- Implementation of MEGABYTE, Predicting Million-byte Sequences with Multiscale Transformers, in PyTorch ☆646 · Updated 6 months ago
- Train LLaMA on a single A100 80G node using 🤗 Transformers and 🚀 DeepSpeed pipeline parallelism ☆223 · Updated last year
- [NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Supports Llama-3/3.1, Llama-2, LLaMA, BLOOM, Vicuna, Baich… ☆1,035 · Updated 9 months ago
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. ☆546 · Updated last year
- Inference code for Facebook's LLaMA models with Wrapyfi support ☆129 · Updated 2 years ago
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best… ☆412 · Updated 2 years ago
- ☆547 · Updated last year
- Official PyTorch implementation of QA-LoRA ☆138 · Updated last year
- The Truth Is In There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction ☆388 · Updated last year