CerebrasResearch / Sparse-IFT
Official repository of Sparse ISO-FLOP Transformations for Maximizing Training Efficiency
☆25 · Updated 5 months ago
Alternatives and similar repositories for Sparse-IFT:
Users interested in Sparse-IFT are comparing it to the libraries listed below.
- Experiment of using Tangent to autodiff Triton ☆74 · Updated 11 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆111 · Updated last month
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference. ☆55 · Updated last month
- Fast and memory-efficient exact attention ☆52 · Updated last month
- CUDA and Triton implementations of Flash Attention with SoftmaxN. ☆67 · Updated 7 months ago
- Cold Compress is a hackable, lightweight, and open-source toolkit for creating and benchmarking cache compression methods built on top of… ☆106 · Updated 5 months ago
- ☆124 · Updated 11 months ago
- A MAD laboratory to improve AI architecture designs 🧪 ☆102 · Updated last month
- some common Huggingface transformers in maximal update parametrization (µP) ☆78 · Updated 2 years ago
- Utilities for Training Very Large Models ☆57 · Updated 3 months ago
- Repository for Sparse Finetuning of LLMs via modified version of the MosaicML llmfoundry ☆40 · Updated last year
- Simple and efficient pytorch-native transformer training and inference (batched) ☆66 · Updated 9 months ago
- Understand and test language model architectures on synthetic tasks. ☆175 · Updated this week
- ☆43 · Updated 2 months ago
- Large scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* ☆81 · Updated last year
- ☆37 · Updated 9 months ago
- Token Omission Via Attention ☆122 · Updated 3 months ago
- ☆83 · Updated 7 months ago
- Language models scale reliably with over-training and on downstream tasks ☆96 · Updated 9 months ago
- ☆74 · Updated last year
- Make triton easier ☆42 · Updated 7 months ago
- Code for studying the super weight in LLM ☆68 · Updated last month
- ☆75 · Updated 6 months ago
- Triton-based implementation of Sparse Mixture of Experts. ☆192 · Updated last month
- Triton Implementation of HyperAttention Algorithm ☆46 · Updated last year
- PyTorch building blocks for OLMo ☆47 · Updated this week
- ML/DL Math and Method notes ☆57 · Updated last year
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" ☆58 · Updated 3 months ago
- seqax = sequence modeling + JAX ☆136 · Updated 6 months ago
- Simple implementation of Speculative Sampling in NumPy for GPT-2. ☆90 · Updated last year