Why Low-Precision Transformer Training Fails: An Analysis on Flash Attention
☆65 · Updated Apr 7, 2026 (last month)
Alternatives and similar repositories for why-low-precision-training-fails
Users interested in why-low-precision-training-fails are comparing it to the repositories listed below.
- ☆49 · Updated May 20, 2025
- ☆14 · Updated Jul 25, 2024
- A collection of scrapers that procure and structure various legal datasets ☆19 · Updated Jun 16, 2023
- Post-processing library used to analyze memory snapshots ☆29 · Updated Apr 8, 2026
- ☆17 · Updated May 14, 2020
- Complete simulation of the IEEE 754 fixed- and floating-point specification at arbitrary precision ☆12 · Updated Aug 26, 2020
- Code and plots for "Active-Dormant Attention Heads: Mechanistically Demystifying Extreme-Token Phenomena in LLMs" ☆11 · Updated Dec 30, 2024
- Supporting code for "LLMs for your iPhone: Whole-Tensor 4 Bit Quantization" ☆11 · Updated Mar 31, 2024
- [ICML 2024] Sparse Model Inversion: Efficient Inversion of Vision Transformers with Less Hallucination ☆14 · Updated Apr 29, 2025
- ☆13 · Updated Nov 27, 2025
- ☆17 · Updated Apr 30, 2025
- AiTer Optimized Model ☆71 · Updated this week
- JsonTuning: Towards Generalizable, Robust, and Controllable Instruction Tuning ☆10 · Updated Nov 3, 2024
- Graph model execution API for Candle ☆17 · Updated Jul 27, 2025
- ☆11 · Updated Apr 5, 2023
- SMART introduces a novel test-time framework where Small Language Models (SLMs) reason step-by-step, and Large Language Models (LLMs) pro… ☆12 · Updated Jul 9, 2025
- [CVPR 2025] QuartDepth ☆17 · Updated Mar 24, 2025
- Train small sequence models in your browser with WebGPU. ☆34 · Updated Dec 3, 2025
- A fast, simple, multi-threaded string interning library. ☆18 · Updated Jul 11, 2025
- A compiler for Decaf, an object-oriented language ☆12 · Updated Sep 26, 2017
- Source code for the TNNLS paper "Boosting Convolutional Neural Networks with Middle Spectrum Grouped Convolution" ☆12 · Updated Apr 14, 2023
- Official implementation of the paper "Controlled Sparsity via Constrained Optimization" ☆12 · Updated Aug 10, 2022
- ☆14 · Updated Jul 14, 2025
- ☆15 · Updated Apr 11, 2024
- Code for "Spectral Norm of Convolutional Layers with Circular and Zero Paddings" and "Efficient Bound of Lipschitz Constant for Convolutiona…" ☆15 · Updated Feb 2, 2024
- LLM Inference with Microscaling Format ☆34 · Updated Nov 12, 2024
- Exports an ONNX QDQ model that conforms to the AXERA NPU quantization specification; currently only w8a8 is supported. ☆11 · Updated Sep 10, 2024
- ☆10 · Updated Apr 24, 2024
- ☆28 · Updated Apr 30, 2026
- CUDA keyring packaging for Debian ☆14 · Updated Apr 14, 2023
- ☆17 · Updated Mar 10, 2025
- The official code for "Advancing Multimodal Large Language Models with Quantization-Aware Scale Learning for Efficient Adaptation" | [MM2… ☆14 · Updated Dec 7, 2024
- ☆23 · Updated Dec 16, 2025
- Demo async runtime ☆10 · Updated Mar 7, 2022
- ☆15 · Updated Jul 2, 2024
- A GGUF file parser ☆17 · Updated Apr 27, 2026
- BESA is a differentiable weight pruning technique for large language models. ☆17 · Updated Mar 4, 2024
- ☆11 · Updated Sep 20, 2024
- Fork of the Flame repo for training new work in development ☆19 · Updated Apr 24, 2026