Jetfire-INT8Training · ☆63 · Updated Jul 21, 2024
Alternatives and similar repositories for Jetfire-INT8Training
Users interested in Jetfire-INT8Training are comparing it to the libraries listed below.
- PyTorch implementation of "Oscillation-Reduced MXFP4 Training for Vision Transformers" on DeiT model pre-training (☆36 · Updated Jun 20, 2025)
- Efficient 2:4 sparse training algorithms and implementations (☆59 · Updated Dec 8, 2024)
- HALO: Hadamard-Assisted Low-Precision Optimization and Training method for fine-tuning LLMs. 🚀 The official implementation of https://arx… (☆29 · Updated Feb 17, 2025)
- ☆156 · Updated Jun 22, 2023
- AdaSplash: Adaptive Sparse Flash Attention (aka Flash Entmax Attention) (☆33 · Updated Sep 30, 2025)
- ☆52 · Updated Nov 5, 2024
- A framework to compare low-bit integer and floating-point formats (☆66 · Updated Feb 6, 2026)
- JAX Scalify: end-to-end scaled arithmetic (☆18 · Updated Oct 30, 2024)
- The official code for "Advancing Multimodal Large Language Models with Quantization-Aware Scale Learning for Efficient Adaptation" | [MM2… (☆14 · Updated Dec 7, 2024)
- [ICML 2024] Official implementation of SLEB: Streamlining LLMs through Redundancy Verification and Elimination of Transformer Blocks (☆39 · Updated Feb 4, 2025)
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training (☆260 · Updated Aug 9, 2025)
- ☆13 · Updated Oct 13, 2025
- ☆18 · Updated Apr 16, 2025
- ☆11 · Updated Sep 20, 2024
- ☆85 · Updated Jan 23, 2025
- ☆25 · Updated Dec 11, 2021
- Supporting code for the blog post on modular manifolds (☆117 · Updated Sep 26, 2025)
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models (☆35 · Updated Jun 12, 2024)
- ☆16 · Updated Dec 9, 2023
- [ICLR 2025] Codebase for "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing", built on Megatron-LM (☆106 · Updated Dec 20, 2024)
- Framework to reduce autotune overhead to zero for well-known deployments (☆97 · Updated Sep 19, 2025)
- ☆35 · Updated Dec 22, 2025
- ☆18 · Updated Dec 2, 2024
- ☆18 · Updated Mar 18, 2024
- PyTorch implementation of StableMask (ICML'24) (☆15 · Updated Jun 27, 2024)
- Low-bit optimizers for PyTorch (☆138 · Updated Oct 9, 2023)
- Official PyTorch implementation of "Efficient Latency-Aware CNN Depth Compression via Two-Stage Dynamic Programming" (ICML'23) (☆13 · Updated Jul 11, 2024)
- ☆17 · Updated Jun 11, 2025
- Training with Block Minifloat number representation (☆18 · Updated May 2, 2021)
- ☆38 · Updated Jul 16, 2025
- Emulating DMA Engines on GPUs for Performance and Portability (☆41 · Updated May 17, 2015)
- ☆169 · Updated Mar 9, 2023
- SLTrain: a sparse plus low-rank approach for parameter- and memory-efficient pretraining (NeurIPS 2024) (☆39 · Updated Nov 1, 2024)
- ☆27 · Updated Mar 29, 2025
- ☆19 · Updated Dec 31, 2025
- ☆52 · Updated Dec 10, 2025
- Fast Hadamard transform in CUDA, with a PyTorch interface (☆285 · Updated Oct 19, 2025)
- ☆52 · Updated Jul 18, 2024
- The open-source materials for the paper "Sparsing Law: Towards Large Language Models with Greater Activation Sparsity" (☆30 · Updated Nov 12, 2024)