huggingface / large_language_model_training_playbook
An open collection of implementation tips, tricks and resources for training large language models
☆473 · Updated 2 years ago
Alternatives and similar repositories for large_language_model_training_playbook
Users interested in large_language_model_training_playbook are comparing it to the libraries listed below
- An open collection of methodologies to help with successful training of large language models. ☆492 · Updated last year
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1GPU + 1Day ☆254 · Updated last year
- Scaling Data-Constrained Language Models ☆334 · Updated 8 months ago
- Reproduce results and replicate training of T0 (Multitask Prompted Training Enables Zero-Shot Task Generalization) ☆463 · Updated 2 years ago
- Used for adaptive human-in-the-loop evaluation of language and embedding models. ☆308 · Updated 2 years ago
- Expanding natural instructions ☆998 · Updated last year
- Open Instruction Generalist is an assistant trained on massive synthetic instructions to perform many millions of tasks ☆208 · Updated last year
- Crosslingual Generalization through Multitask Finetuning ☆535 · Updated 8 months ago
- Code for fine-tuning the Platypus family of LLMs using LoRA ☆628 · Updated last year
- Central place for the engineering/scaling WG: documentation, SLURM scripts and logs, compute environment and data. ☆997 · Updated 10 months ago
- Ask Me Anything language model prompting ☆546 · Updated last year
- Code used for sourcing and cleaning the BigScience ROOTS corpus ☆313 · Updated 2 years ago
- Build, evaluate, understand, and fix LLM-based apps ☆489 · Updated last year
- Code for T-Few from "Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning" ☆451 · Updated last year
- Fast Inference Solutions for BLOOM ☆564 · Updated 7 months ago
- Repo for Belebele, a massively multilingual reading comprehension dataset. ☆329 · Updated 5 months ago
- Original Implementation of Prompt Tuning from Lester et al., 2021 ☆681 · Updated 2 months ago
- Pipeline for pulling and processing online language model pretraining data from the web ☆178 · Updated last year
- Dataset collection and preprocessing framework for NLP extreme multitask learning ☆182 · Updated 4 months ago
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆697 · Updated last year
- Organize your experiments into discrete steps that can be cached and reused throughout the lifetime of your research project. ☆561 · Updated last year
- Reading list on instruction tuning. A trend that starts from Natural-Instructions (ACL 2022), FLAN (ICLR 2022), and T0 (ICLR 2022). ☆770 · Updated last year
- Code repository for supporting the paper "Atlas: Few-shot Learning with Retrieval Augmented Language Models" (https://arxiv.org/abs/2208.03… ☆537 · Updated last year
- SGPT: GPT Sentence Embeddings for Semantic Search ☆868 · Updated last year
- Batched LoRAs ☆343 · Updated last year
- Mistral: A strong, northwesterly wind: Framework for transparent and accessible large-scale language model training, built with Hugging F… ☆572 · Updated last year
- Interpretability for sequence generation models 🐛 🔍 ☆419 · Updated last month
- Simple next-token-prediction for RLHF ☆226 · Updated last year
- OpenAlpaca: A Fully Open-Source Instruction-Following Model Based On OpenLLaMA ☆302 · Updated last year
- 🤖 A PyTorch library of curated Transformer models and their composable components ☆888 · Updated last year