huggingface / llm_training_handbook
An open collection of methodologies to help with successful training of large language models.
☆492 · Updated last year
Alternatives and similar repositories for llm_training_handbook
Users interested in llm_training_handbook are comparing it to the repositories listed below.
- An open collection of implementation tips, tricks, and resources for training large language models ☆473 · Updated 2 years ago
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆697 · Updated last year
- Batched LoRAs ☆343 · Updated last year
- Distributed trainer for LLMs ☆575 · Updated last year
- [ACL 2023] We introduce LLM-Blender, an innovative ensembling framework to attain consistently superior performance by leveraging the dive… ☆945 · Updated 7 months ago
- Code for fine-tuning Platypus family LLMs using LoRA ☆628 · Updated last year
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day ☆254 · Updated last year
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆302 · Updated last year
- A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data. ☆812 · Updated 11 months ago
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. ☆547 · Updated last year
- Scaling Data-Constrained Language Models ☆334 · Updated 8 months ago
- A bagel, with everything. ☆320 · Updated last year
- A joint community effort to create one central leaderboard for LLMs. ☆299 · Updated 9 months ago
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆462 · Updated last year
- Build, evaluate, understand, and fix LLM-based apps ☆489 · Updated last year
- A repository for research on medium-sized language models. ☆497 · Updated last month
- Code repository supporting the paper "Atlas: Few-shot Learning with Retrieval Augmented Language Models" (https://arxiv.org/abs/2208.03…) ☆537 · Updated last year
- Reading list on instruction tuning. A trend that starts from Natural-Instructions (ACL 2022), FLAN (ICLR 2022), and T0 (ICLR 2022). ☆769 · Updated last year
- A collection of awesome prompt and instruction datasets to train ChatLLMs such as ChatGPT. Collects a wide variety of instruction datasets for training ChatLLM models. ☆673 · Updated last year
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ☆612 · Updated last year
- All available datasets for instruction tuning of large language models ☆250 · Updated last year
- LLM Workshop by Sourab Mangrulkar ☆381 · Updated 11 months ago
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware ☆727 · Updated 8 months ago
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition ☆637 · Updated 10 months ago
- Official repository for LongChat and LongEval ☆518 · Updated last year
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning ☆651 · Updated last year
- LOMO: LOw-Memory Optimization ☆986 · Updated 11 months ago
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,249 · Updated 3 months ago
- ☆536 · Updated 9 months ago
- A collection of open-source datasets to train instruction-following LLMs (ChatGPT, LLaMA, Alpaca) ☆1,120 · Updated last year