Multipack distributed sampler for fast padding-free training of LLMs
☆204 · Updated Aug 10, 2024
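The core idea behind a padding-free sampler like this is to pack variable-length sequences into fixed token budgets so no batch slot is wasted on padding. A minimal sketch of one common approach, greedy first-fit-decreasing bin packing over sequence lengths (a simplified illustration under that assumption, not this repository's actual implementation, which also balances work across distributed ranks):

```python
def pack_sequences(lengths, max_tokens):
    """Greedily pack sequence lengths into bins holding at most
    max_tokens tokens each (first-fit decreasing). Returns lists of
    sequence indices, one list per packed batch.

    Simplified sketch: a real distributed sampler must also assign
    bins evenly across ranks and reshuffle each epoch.
    """
    # Longest sequences first: FFD gives much tighter packing than
    # visiting sequences in arbitrary order.
    order = sorted(range(len(lengths)), key=lambda i: lengths[i], reverse=True)
    bins, loads = [], []
    for i in order:
        # Place the sequence in the first bin with enough room left.
        for b, load in enumerate(loads):
            if load + lengths[i] <= max_tokens:
                bins[b].append(i)
                loads[b] += lengths[i]
                break
        else:
            # No bin fits; open a new one.
            bins.append([i])
            loads.append(lengths[i])
    return bins

# Example: pack 6 sequences into 2048-token bins.
batches = pack_sequences([1500, 900, 700, 600, 400, 100], max_tokens=2048)
```

Compared with padding every sequence to the batch maximum, packing like this keeps almost every token position occupied by real data, which is where the training speedup comes from.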
Alternatives and similar repositories for multipack
- ☆124 · Updated May 28, 2024
- Code for the paper "Function-Space Learning Rates" (☆25 · Updated Jun 3, 2025)
- [WIP] Transformer to embed Danbooru labelsets (☆13 · Updated Mar 31, 2024)
- A fusion of a linear layer and a cross-entropy loss, written for PyTorch in Triton (☆75 · Updated Aug 2, 2024)
- Official code for ReLoRA from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates" (☆473 · Updated Apr 21, 2024)
- Generate textbook-quality synthetic LLM pretraining data (☆509 · Updated Oct 19, 2023)
- Supervised instruction finetuning for LLMs with the HF Trainer and DeepSpeed (☆36 · Updated Jul 6, 2023)
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… (☆280 · Updated Nov 24, 2025)
- Reimplementation of the task-generation part of the Alpaca paper (☆119 · Updated Apr 4, 2023)
- YaRN: Efficient Context Window Extension of Large Language Models (☆1,669 · Updated Apr 17, 2024)
- A bagel, with everything. (☆326 · Updated Apr 11, 2024)
- ☆32 · Updated Jan 1, 2024
- Customizable implementation of the self-instruct paper (☆1,049 · Updated Mar 7, 2024)
- Batched LoRAs (☆349 · Updated Sep 6, 2023)
- Minimal (400 LOC) implementation of maximal (multi-node, FSDP) GPT training (☆132 · Updated Apr 17, 2024)
- ☆44 · Updated Jun 19, 2024
- ☆27 · Updated Aug 30, 2023
- ☆93 · Updated Jul 5, 2024
- QLoRA with enhanced multi-GPU support (☆38 · Updated Aug 8, 2023)
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all large language models (☆70 · Updated Aug 27, 2023)
- Convert all of libgen to high-quality Markdown (☆255 · Updated Dec 13, 2023)
- BFloat16 fused Adam operator for PyTorch (☆16 · Updated Nov 16, 2024)
- 🚀 Collection of libraries used with fms-hf-tuning to accelerate fine-tuning and training of large models (☆13 · Updated Jan 30, 2026)
- QLoRA: Efficient Finetuning of Quantized LLMs (☆11 · Updated Jul 22, 2023)
- Understanding the correlation between different LLM benchmarks (☆29 · Updated Jan 11, 2024)
- GPT*: training small transformers faster using ALiBi, parallel residual connections, and more (☆21 · Updated Oct 29, 2022)
- Demonstration that finetuning a RoPE model on sequences longer than those seen in pre-training extends the model's context limit (☆63 · Updated Jun 21, 2023)
- Minimalistic large language model 3D-parallelism training (☆2,569 · Updated Feb 19, 2026)
- ☆22 · Updated Aug 27, 2023
- AdamW optimizer for bfloat16 models in PyTorch 🔥 (☆39 · Updated Jun 16, 2024)
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens (☆150 · Updated Jan 7, 2026)
- ☆16 · Updated Feb 6, 2024
- ☆18 · Updated Apr 3, 2023
- Microsoft Automatic Mixed Precision Library (☆636 · Updated Dec 1, 2025)
- ☆415 · Updated Nov 2, 2023
- Utilities for PyTorch distributed (☆25 · Updated Feb 27, 2025)
- Tools for merging pretrained large language models (☆6,814 · Updated Jan 26, 2026)
- Repository containing the SPIN experiments on the DIBT 10k ranked prompts (☆23 · Updated Mar 12, 2024)
- Robust recipes to align language models with human and AI preferences (☆5,506 · Updated Sep 8, 2025)