HALO: Hadamard-Assisted Low-Precision Optimization and Training method for fine-tuning LLMs. The official implementation of https://arxiv.org/abs/2501.02625
★29 · Updated Feb 17, 2025
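The core idea named in the title, rotating tensors with a Hadamard transform so that outliers are spread out before low-precision quantization, can be illustrated with a minimal sketch. This is not the official HALO code: the `hadamard`, `quantize_int4`, and per-tensor scaling below are simplified assumptions for demonstration only.

```python
# Illustrative sketch (NOT the official HALO implementation): an orthogonal
# Hadamard rotation spreads a single outlier across all coordinates, so a
# coarse per-tensor quantizer loses far less information.
import numpy as np

def hadamard(n):
    # Sylvester construction; n must be a power of two.
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(n)  # orthonormal: H.T @ H == I

def quantize_int4(x):
    # Toy symmetric 4-bit quantizer with one scale for the whole tensor.
    scale = np.abs(x).max() / 7.0
    q = np.clip(np.round(x / scale), -8, 7)
    return q * scale  # dequantized values

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))
W[0, 0] = 50.0  # a single outlier dominates the quantization scale

H = hadamard(64)
direct_err = np.linalg.norm(quantize_int4(W) - W)
# Rotate, quantize, rotate back; since H is orthogonal, H.T @ (H @ W) == W.
rotated_err = np.linalg.norm(H.T @ quantize_int4(H @ W) - W)
print(direct_err, rotated_err)
```

On this toy example the rotated quantization error comes out several times smaller than the direct one, because the outlier no longer inflates the shared scale.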
Alternatives and similar repositories for HALO
Users interested in HALO are comparing it to the repositories listed below.
- [ICML 2025] SparseLoRA: Accelerating LLM Fine-Tuning with Contextual Sparsity (★71 · Updated Mar 10, 2026)
- PyTorch implementation of "Oscillation-Reduced MXFP4 Training for Vision Transformers" for DeiT model pre-training (★36 · Updated Jun 20, 2025)
- Explore training for quantized models (★26 · Updated Jul 12, 2025)
- IntLLaMA: A fast and light quantization solution for LLaMA (★18 · Updated Jul 21, 2023)
- Implementation of the N:M sparsity recipe from the paper "Progressive Gradient Flow for Robust N:M Sparsity Training in Transformers" (★11 · Updated Feb 5, 2024)
- [HPCA 2026] A GPU-optimized system for efficient long-context LLM decoding with a low-bit KV cache (★81 · Updated Dec 18, 2025)
- Official implementation of "Bayes Conditional Distribution Estimation for Knowledge Distillation Based on Conditional Mutual Information" (★11 · Updated Sep 28, 2023)
- Source code of "Leaky Thoughts: Large Reasoning Models Are Not Private Thinkers", EMNLP 2025 (★17 · Updated Jan 12, 2026)
- High-speed GEMV kernels with up to 2.7× speedup over the PyTorch baseline (★128 · Updated Jul 13, 2024)
- Official code of "The Combinatorial Brain Surgeon: Pruning Weights That Cancel One Another in Neural Networks" [ICML 2022] (★17 · Updated Sep 20, 2022)
- GitHub repo for OATS: Outlier-Aware Pruning through Sparse and Low-Rank Decomposition (★18 · Updated Apr 16, 2025)
- Decoding Attention: specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference (★45 · Updated Jun 11, 2025)
- Optimization algorithm that fits a ResNet to CIFAR-10 5× faster than SGD/Adam (with terrible generalization) (★14 · Updated Oct 20, 2023)
- Code for the API, workload execution, and agents underlying the LLMail-Inject Adaptive Prompt Injection Challenge (★21 · Updated Mar 1, 2026)
- Code for the MicroAdam paper (★21 · Updated Dec 14, 2024)
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference (★376 · Updated Jul 10, 2025)
- Official repository for the paper "Local Linear Attention: An Optimal Interpolation of Linear and Softmax Attention For Test-Time Regressi…" (★23 · Updated Oct 1, 2025)
- An extension to the GaLore paper, performing natural gradient descent in a low-rank subspace (★18 · Updated Oct 21, 2024)
- Training with the Block Minifloat number representation (★18 · Updated May 2, 2021)
- Fast matrix multiplications for lookup-table-quantized LLMs (★389 · Updated Apr 13, 2025)
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving (★335 · Updated Jul 2, 2024)
- Code for the AAAI 2024 paper "CR-SAM: Curvature Regularized Sharpness-Aware Minimization" (★12 · Updated Nov 29, 2024)
- ClockBench: a visual reasoning AI benchmark (★31 · Updated Sep 4, 2025)
- A selective knowledge distillation algorithm for efficient speculative decoders (★36 · Updated Nov 27, 2025)