Lightning-Universe / lightning-Hivemind
Lightning Training strategy for Hivemind
☆18 · Updated 2 weeks ago
Alternatives and similar repositories for lightning-Hivemind
Users interested in lightning-Hivemind are comparing it to the libraries listed below.
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8. ☆46 · Updated last year
- Example of applying CUDA graphs to LLaMA-v2. ☆12 · Updated 2 years ago
- ☆74 · Updated 5 months ago
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆57 · Updated this week
- Experiment of using Tangent to autodiff Triton. ☆81 · Updated last year
- ☆111 · Updated last year
- A safetensors extension to efficiently store sparse quantized tensors on disk. ☆161 · Updated this week
- QuIP quantization. ☆59 · Updated last year
- This repository contains the experimental PyTorch native float8 training UX. ☆224 · Updated last year
- A bunch of kernels that might make stuff slower 😉 ☆59 · Updated this week
- Cold Compress is a hackable, lightweight, and open-source toolkit for creating and benchmarking cache compression methods built on top of… ☆146 · Updated last year
- A library for unit scaling in PyTorch. ☆130 · Updated 2 months ago
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind… ☆161 · Updated 2 months ago
- Official code for "SWARM Parallelism: Training Large Models Can Be Surprisingly Communication-Efficient". ☆143 · Updated last year
- How to ensure correctness and ship LLM-generated kernels in PyTorch. ☆58 · Updated this week
- The evaluation framework for training-free sparse attention in LLMs. ☆96 · Updated 3 months ago
- Collection of kernels written in the Triton language. ☆154 · Updated 5 months ago
- Code for studying the super weight in LLMs. ☆117 · Updated 9 months ago
- ☆94 · Updated 3 weeks ago
- ☆159 · Updated 2 years ago
- Make Triton easier. ☆47 · Updated last year
- Repository for CPU kernel generation for LLM inference. ☆26 · Updated 2 years ago
- Train with kittens! ☆62 · Updated 10 months ago
- Repository for sparse fine-tuning of LLMs via a modified version of the MosaicML llmfoundry. ☆42 · Updated last year
- Triton-based implementation of Sparse Mixture of Experts. ☆240 · Updated 3 weeks ago
- Official implementation for "Training LLMs with MXFP4". ☆91 · Updated 4 months ago
- ☆28 · Updated 8 months ago
- Advanced ultra-low-bitrate compression techniques for the LLaMA family of LLMs. ☆110 · Updated last year
- Work in progress. ☆72 · Updated 2 months ago
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆212 · Updated this week