Multipack distributed sampler for fast padding-free training of LLMs
☆206 · updated Aug 10, 2024
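The core idea behind a multipack-style sampler is to bin-pack variable-length sequences into fixed token budgets so that each batch carries almost no padding. A minimal sketch of that idea, using a first-fit-decreasing heuristic; the function name and the specific heuristic are illustrative assumptions, not the repository's actual API:

```python
def pack_sequences(lengths, budget):
    """Pack sequence lengths into bins of at most `budget` tokens each.

    Hypothetical sketch (first-fit decreasing): sort sequences longest
    first, then place each one into the first bin with enough remaining
    capacity, opening a new bin when none fits.
    """
    bins = []  # each bin: [remaining_capacity, [sequence indices]]
    order = sorted(range(len(lengths)), key=lambda i: lengths[i], reverse=True)
    for i in order:
        n = lengths[i]
        for b in bins:
            if b[0] >= n:       # first bin that still fits this sequence
                b[0] -= n
                b[1].append(i)
                break
        else:                   # no existing bin fits: open a new one
            bins.append([budget - n, [i]])
    return [indices for _, indices in bins]

if __name__ == "__main__":
    lengths = [900, 700, 600, 300, 200, 100]
    print(pack_sequences(lengths, budget=1024))
```

Each returned bin stays within the token budget, so batches can be concatenated without padding tokens; a real distributed sampler would additionally shard the bins across ranks.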
Alternatives and similar repositories for multipack
Users interested in multipack are comparing it to the libraries listed below.
- ☆16 · updated Feb 6, 2024
- BFloat16 Fused Adam Operator for PyTorch · ☆17 · updated Nov 16, 2024
- A fusion of a linear layer and a cross-entropy loss, written for PyTorch in Triton · ☆75 · updated Aug 2, 2024
- ☆124 · updated May 28, 2024
- [WIP] Transformer to embed Danbooru labelsets · ☆13 · updated Mar 31, 2024
- Reimplementation of the task-generation part of the Alpaca paper · ☆119 · updated Apr 4, 2023
- Official code for ReLoRA, from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates" · ☆474 · updated Apr 21, 2024
- Generate textbook-quality synthetic LLM pretraining data · ☆509 · updated Oct 19, 2023
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and an SDPA implementation of Flash… · ☆282 · updated Nov 24, 2025
- QLoRA with enhanced multi-GPU support · ☆38 · updated Aug 8, 2023
- A fork of the PEFT library, supporting Robust Adaptation (RoSA) · ☆15 · updated Aug 16, 2024
- Supervised instruction finetuning for LLMs with the HF Trainer and DeepSpeed · ☆36 · updated Jul 6, 2023
- Batched LoRAs · ☆351 · updated Sep 6, 2023
- One RL platform is all you need: an event-driven, fully distributed reinforcement learning framework · ☆21 · updated Oct 25, 2023
- QLoRA: Efficient Finetuning of Quantized LLMs · ☆11 · updated Jul 22, 2023
- YaRN: Efficient Context Window Extension of Large Language Models · ☆1,685 · updated Apr 17, 2024
- A bagel, with everything. · ☆326 · updated Apr 11, 2024
- Customizable implementation of the Self-Instruct paper · ☆1,050 · updated Mar 7, 2024
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full finetunes · ☆83 · updated Sep 10, 2023
- ☆93 · updated Jul 5, 2024
- Code for the paper "Function-Space Learning Rates" · ☆25 · updated Jun 3, 2025
- Minimalistic large language model 3D-parallelism training · ☆2,609 · updated Feb 19, 2026
- ☆32 · updated Jan 1, 2024
- ☆27 · updated Aug 30, 2023
- Understanding the correlation between different LLM benchmarks · ☆29 · updated Jan 11, 2024
- Reference implementation for "Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model" · ☆45 · updated Oct 1, 2025
- ☆415 · updated Nov 2, 2023
- ☆74 · updated Sep 5, 2023
- Zeus LLM Trainer, a rewrite of Stanford Alpaca aiming to be the trainer for all large language models · ☆70 · updated Aug 27, 2023
- Minimal (400 LOC) implementation of maximal (multi-node, FSDP) GPT training · ☆132 · updated Apr 17, 2024
- GPT*: training faster small transformers using ALiBi, parallel residual connections, and more · ☆21 · updated Oct 29, 2022
- Convert all of libgen to high-quality Markdown · ☆255 · updated Dec 13, 2023
- SparseGPT + GPTQ compression of LLMs such as LLaMA, OPT, and Pythia · ☆42 · updated Mar 13, 2023
- ☆18 · updated Apr 3, 2023
- ☆44 · updated Jun 19, 2024
- Modeling code for a BitNet b1.58 Llama-style model · ☆25 · updated Apr 30, 2024
- Parallel associative scan for language models · ☆18 · updated Jan 8, 2024
- Demonstration that finetuning a RoPE model on sequences longer than its pretraining length extends the model's context limit · ☆63 · updated Jun 21, 2023
- Tools for merging pretrained large language models · ☆6,867 · updated this week