Jetfire-INT8Training ★63 · Jul 21, 2024 · Updated last year

Alternatives and similar repositories for Jetfire-INT8Training

Users interested in Jetfire-INT8Training are comparing it to the libraries listed below.
- HALO: Hadamard-Assisted Low-Precision Optimization and Training method for finetuning LLMs. The official implementation of https://arx… ★29 · Feb 17, 2025 · Updated last year
- PyTorch implementation of "Oscillation-Reduced MXFP4 Training for Vision Transformers" on DeiT model pre-training ★37 · Jun 20, 2025 · Updated 9 months ago
- ★157 · Jun 22, 2023 · Updated 2 years ago
- ★27 · Mar 29, 2025 · Updated 11 months ago
- Efficient 2:4 sparse training algorithms and implementations ★59 · Dec 8, 2024 · Updated last year
- The official code for "Advancing Multimodal Large Language Models with Quantization-Aware Scale Learning for Efficient Adaptation" | [MM2… ★14 · Dec 7, 2024 · Updated last year
- A framework to compare low-bit integer and floating-point formats ★71 · Feb 6, 2026 · Updated last month
- ★87 · Jan 23, 2025 · Updated last year
- [ICML 2024] Official implementation of SLEB: Streamlining LLMs through Redundancy Verification and Elimination of Transformer Blocks ★39 · Feb 4, 2025 · Updated last year
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models ★35 · Jun 12, 2024 · Updated last year
- ★13 · Oct 13, 2025 · Updated 5 months ago
- ★52 · Nov 5, 2024 · Updated last year
- ★35 · Dec 22, 2025 · Updated 3 months ago
- Continuous batching and parallel acceleration for RWKV6 ★22 · Jun 28, 2024 · Updated last year
- ★25 · Dec 11, 2021 · Updated 4 years ago
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training ★262 · Aug 9, 2025 · Updated 7 months ago
- JAX Scalify: end-to-end scaled arithmetic ★18 · Oct 30, 2024 · Updated last year
- [ICLR 2025] Codebase for "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing", built on Megatron-LM ★109 · Dec 20, 2024 · Updated last year
- ★53 · Jul 18, 2024 · Updated last year
- Low-bit optimizers for PyTorch ★138 · Oct 9, 2023 · Updated 2 years ago
- ★16 · Dec 9, 2023 · Updated 2 years ago
- ★169 · Mar 9, 2023 · Updated 3 years ago
- [COLM 2025] DFRot: Achieving Outlier-Free and Massive Activation-Free for Rotated LLMs with Refined Rotation; Zhihu: https://zhuanlan.zhihu.c… ★29 · Mar 5, 2025 · Updated last year
- AdaSplash: Adaptive Sparse Flash Attention (aka Flash Entmax Attention) ★35 · Sep 30, 2025 · Updated 5 months ago
- ★54 · Dec 10, 2025 · Updated 3 months ago
- ★11 · Sep 20, 2024 · Updated last year
- A Winograd Minimal Filter Implementation in CUDA ★28 · Aug 25, 2021 · Updated 4 years ago
- Fast Hadamard transform in CUDA, with a PyTorch interface ★293 · Mar 10, 2026 · Updated last week
- Emulating DMA Engines on GPUs for Performance and Portability ★41 · May 17, 2015 · Updated 10 years ago
- ★18 · Apr 16, 2025 · Updated 11 months ago
- Sequence-level 1F1B schedule for LLMs ★38 · Aug 26, 2025 · Updated 6 months ago
- Microsoft Automatic Mixed Precision Library ★635 · Dec 1, 2025 · Updated 3 months ago
- Framework to reduce autotune overhead to zero for well-known deployments ★97 · Sep 19, 2025 · Updated 6 months ago
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving ★336 · Jul 2, 2024 · Updated last year
- This repository contains the experimental PyTorch-native float8 training UX ★226 · Aug 1, 2024 · Updated last year
- Boosting GPU utilization for LLM serving via dynamic spatial-temporal prefill & decode orchestration ★37 · Jan 8, 2026 · Updated 2 months ago
- Official PyTorch implementation of "Efficient Latency-Aware CNN Depth Compression via Two-Stage Dynamic Programming" (ICML'23) ★13 · Jul 11, 2024 · Updated last year
- ★19 · Dec 24, 2024 · Updated last year
- Slowdown prediction module of Echo: Simulating Distributed Training at Scale ★13 · May 17, 2025 · Updated 10 months ago
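Many of the repositories above, including Jetfire-INT8Training itself and Atom, build on INT8 quantization. As a minimal sketch of the core idea only, here is symmetric per-tensor INT8 quantization in NumPy; the function names are illustrative and do not come from any listed repository.

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor INT8 quantization: map floats onto [-127, 127]."""
    max_abs = float(np.max(np.abs(x)))
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor from INT8 values and the scale."""
    return q.astype(np.float32) * scale

x = np.array([0.1, -1.5, 3.2, -0.7], dtype=np.float32)
q, s = quantize_int8(x)
x_hat = dequantize(q, s)
# Round-to-nearest bounds the per-element error by half a quantization step.
assert np.max(np.abs(x - x_hat)) <= s / 2 + 1e-6
```

The listed training-oriented projects go well beyond this per-tensor scheme (per-block scales, Hadamard rotations to tame outliers, quantized gradients), but they all rely on this scale-round-clip primitive.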