Multipack distributed sampler for fast padding-free training of LLMs
☆208 · Updated Aug 10, 2024
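To illustrate the idea behind padding-free training, here is a minimal sketch (not the multipack repository's actual code, and the function name `pack_sequences` is invented for this example): variable-length sequences are packed into fixed-capacity token bins with a first-fit-decreasing heuristic, so each batch is nearly full and little to no padding is wasted.

```python
def pack_sequences(lengths, capacity):
    """Greedily assign sequence indices to bins of at most `capacity` tokens."""
    # First-fit-decreasing: place sequences longest-first into the first bin
    # that still has room, opening a new bin only when none fits.
    order = sorted(range(len(lengths)), key=lambda i: lengths[i], reverse=True)
    bins, loads = [], []  # parallel lists: sequence indices per bin, tokens per bin
    for i in order:
        for b, load in enumerate(loads):
            if load + lengths[i] <= capacity:
                bins[b].append(i)
                loads[b] += lengths[i]
                break
        else:
            bins.append([i])
            loads.append(lengths[i])
    return bins

# Six sequences packed into 1024-token bins:
print(pack_sequences([700, 300, 512, 512, 200, 100], capacity=1024))
# → [[0, 1], [2, 3], [4, 5]]
```

A distributed sampler built on this idea would additionally shard the resulting bins across ranks so every GPU receives batches of roughly equal token counts.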
Alternatives and similar repositories for multipack
Users interested in multipack are comparing it to the libraries listed below.
- ☆16 · Updated Feb 6, 2024
- BFloat16 fused Adam operator for PyTorch ☆19 · Updated Nov 16, 2024
- A fusion of a linear layer and a cross-entropy loss, written for PyTorch in Triton ☆75 · Updated Aug 2, 2024
- ☆124 · Updated May 28, 2024
- [WIP] Transformer to embed Danbooru labelsets ☆13 · Updated Mar 31, 2024
- Reimplementation of the task-generation part of the Alpaca paper ☆119 · Updated Apr 4, 2023
- Official code for ReLoRA, from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates" ☆474 · Updated Apr 21, 2024
- Generate textbook-quality synthetic LLM pretraining data ☆509 · Updated Oct 19, 2023
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and an SDPA implementation of Flash… ☆285 · Updated Nov 24, 2025
- QLoRA with enhanced multi-GPU support ☆38 · Updated Aug 8, 2023
- A fork of the PEFT library supporting Robust Adaptation (RoSA) ☆15 · Updated Aug 16, 2024
- Supervised instruction finetuning for LLMs with the HF Trainer and DeepSpeed ☆37 · Updated Jul 6, 2023
- Batched LoRAs ☆351 · Updated Sep 6, 2023
- One RL platform is all you need: an event-driven, fully distributed reinforcement learning framework ☆21 · Updated Oct 25, 2023
- QLoRA: Efficient Finetuning of Quantized LLMs ☆11 · Updated Jul 22, 2023
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,708 · Updated Apr 17, 2024
- A bagel, with everything. ☆326 · Updated Apr 11, 2024
- Customizable implementation of the Self-Instruct paper ☆1,053 · Updated Mar 7, 2024
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full finetunes ☆83 · Updated Sep 10, 2023
- Code for the paper "Function-Space Learning Rates" ☆25 · Updated Jun 3, 2025
- ☆93 · Updated Jul 5, 2024
- Minimalistic large language model 3D-parallelism training ☆2,663 · Updated Apr 7, 2026
- ☆28 · Updated Aug 30, 2023
- ☆32 · Updated Jan 1, 2024
- Understanding the correlation between different LLM benchmarks ☆29 · Updated Jan 11, 2024
- ☆415 · Updated Nov 2, 2023
- Reference implementation of Reward-Augmented Decoding: Efficient Controlled Text Generation with a Unidirectional Reward Model ☆45 · Updated Oct 1, 2025
- ☆74 · Updated Sep 5, 2023
- Zeus LLM Trainer is a rewrite of Stanford Alpaca that aims to be the trainer for all large language models ☆70 · Updated Aug 27, 2023
- Minimal (400 LOC) implementation of maximum (multi-node, FSDP) GPT training ☆132 · Updated Apr 17, 2024
- GPT*: training faster small transformers using ALiBi, parallel residual connections, and more! ☆20 · Updated Oct 29, 2022
- Convert all of libgen to high-quality Markdown ☆255 · Updated Dec 13, 2023
- SparseGPT + GPTQ compression of LLMs such as LLaMA, OPT, and Pythia ☆42 · Updated Mar 13, 2023
- ☆18 · Updated Apr 3, 2023
- ☆45 · Updated Jun 19, 2024
- Parallel associative scan for language models ☆18 · Updated Jan 8, 2024
- Modeling code for a BitNet b1.58 Llama-style model ☆25 · Updated Apr 30, 2024
- Demonstration that finetuning a RoPE model on sequences longer than those seen in pretraining extends the model's context limit ☆63 · Updated Jun 21, 2023
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆150 · Updated Jan 7, 2026