HALO: Hadamard-Assisted Low-Precision Optimization and Training method for fine-tuning LLMs. The official implementation of https://arxiv.org/abs/2501.02625
★28 · Feb 17, 2025 · Updated last year
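As a rough illustration of the general idea behind Hadamard-assisted quantization (not HALO's actual code, which handles full low-precision training), the sketch below shows why rotating a tensor with an orthonormal Walsh-Hadamard transform before round-to-nearest quantization helps: a single outlier that would otherwise dominate the quantization scale gets its energy spread evenly across all coordinates. The helper names `fwht` and `quantize_4bit` are mine, and the 4-bit symmetric scheme is a simplifying assumption.

```python
import numpy as np

def fwht(x):
    """Normalized fast Walsh-Hadamard transform of a 1-D vector whose
    length is a power of two. Orthonormal and involutory: fwht(fwht(x)) == x."""
    n = len(x)
    assert n > 0 and (n & (n - 1)) == 0, "length must be a power of two"
    h = np.asarray(x, dtype=np.float64).copy()
    step = 1
    while step < n:
        # Butterfly pass: combine adjacent blocks of size `step`.
        for i in range(0, n, 2 * step):
            a = h[i:i + step].copy()
            b = h[i + step:i + 2 * step].copy()
            h[i:i + step] = a + b
            h[i + step:i + 2 * step] = a - b
        step *= 2
    return h / np.sqrt(n)

def quantize_4bit(v):
    """Symmetric round-to-nearest quantization to int4-style levels [-7, 7]."""
    scale = np.abs(v).max() / 7.0
    return np.round(v / scale) * scale

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
x[0] = 100.0  # a single large outlier dominates the quantization scale

# Quantize directly: the outlier forces a huge scale, crushing small values.
err_direct = np.mean((quantize_4bit(x) - x) ** 2)

# Rotate first: the outlier's energy spreads across all 64 coordinates, so
# the scale shrinks. The transform is orthonormal, so rotating back after
# quantization leaves the (now much smaller) error unchanged in norm.
err_rotated = np.mean((fwht(quantize_4bit(fwht(x))) - x) ** 2)

print(f"MSE without rotation: {err_direct:.3f}")
print(f"MSE with rotation:    {err_rotated:.3f}")
```

In practice methods in this family use randomized Hadamard transforms and fuse the rotation into adjacent matrix multiplications, but the error-reduction mechanism is the same one shown here.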
Alternatives and similar repositories for HALO
Users interested in HALO are comparing it to the repositories listed below.
- ★129 · Feb 17, 2026 · Updated 2 months ago
- ★63 · Jul 21, 2024 · Updated last year
- [ICML 2025] SparseLoRA: Accelerating LLM Fine-Tuning with Contextual Sparsity · ★75 · Mar 10, 2026 · Updated last month
- PyTorch implementation of "Oscillation-Reduced MXFP4 Training for Vision Transformers" on DeiT model pre-training · ★39 · Jun 20, 2025 · Updated 10 months ago
- Explore training for quantized models · ★26 · Jul 12, 2025 · Updated 9 months ago
- IntLLaMA: A fast and light quantization solution for LLaMA · ★18 · Jul 21, 2023 · Updated 2 years ago
- Implementation of the N:M sparsity recipe from the paper "Progressive Gradient Flow for Robust N:M Sparsity Training in Transformers" · ★11 · Feb 5, 2024 · Updated 2 years ago
- [HPCA 2026] A GPU-optimized system for efficient long-context LLM decoding with a low-bit KV cache · ★86 · Dec 18, 2025 · Updated 4 months ago
- ★34 · Jul 15, 2021 · Updated 4 years ago
- Official implementation of "Bayes Conditional Distribution Estimation for Knowledge Distillation Based on Conditional Mutual Information" · ★11 · Sep 28, 2023 · Updated 2 years ago
- ★35 · Dec 22, 2025 · Updated 4 months ago
- ★157 · Jun 22, 2023 · Updated 2 years ago
- ★11 · Dec 8, 2022 · Updated 3 years ago
- ★15 · May 3, 2024 · Updated last year
- Source code of "Leaky Thoughts: Large Reasoning Models Are Not Private Thinkers" (EMNLP 2025) · ★17 · Jan 12, 2026 · Updated 3 months ago
- High-speed GEMV kernels with up to 2.7× speedup over the PyTorch baseline · ★128 · Jul 13, 2024 · Updated last year
- [ICML 2022] Official code of "The Combinatorial Brain Surgeon: Pruning Weights That Cancel One Another in Neural Networks" · ★16 · Sep 20, 2022 · Updated 3 years ago
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA, using CUDA cores for the decoding stage of LLM inference · ★46 · Jun 11, 2025 · Updated 10 months ago
- Optimization algorithm that fits a ResNet to CIFAR-10 5× faster than SGD/Adam (with terrible generalization) · ★14 · Oct 20, 2023 · Updated 2 years ago
- ★22 · Nov 26, 2025 · Updated 5 months ago
- ★17 · Dec 7, 2025 · Updated 4 months ago
- GitHub repo for OATS: Outlier-Aware Pruning through Sparse and Low-Rank Decomposition · ★21 · Apr 16, 2025 · Updated last year
- Code for the MicroAdam paper · ★21 · Dec 14, 2024 · Updated last year
- A collection of specialized agent skills for AI infrastructure development, enabling Claude Code to write, optimize, and debug high-perfo… · ★121 · Apr 15, 2026 · Updated 2 weeks ago
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference · ★381 · Jul 10, 2025 · Updated 9 months ago
- Code for the API, workload execution, and agents underlying the LLMail-Inject Adaptive Prompt Injection Challenge · ★22 · Apr 9, 2026 · Updated 2 weeks ago
- Official repository of Flash Local Linear Attention · ★23 · Updated this week
- ★33 · Nov 19, 2025 · Updated 5 months ago
- ★11 · Jan 13, 2026 · Updated 3 months ago
- ★27 · Mar 29, 2025 · Updated last year
- Training with the Block Minifloat number representation · ★18 · May 2, 2021 · Updated 4 years ago
- An extension of the GaLore paper that performs natural gradient descent in a low-rank subspace · ★18 · Oct 21, 2024 · Updated last year
- Fast Matrix Multiplications for Lookup Table-Quantized LLMs · ★391 · Apr 13, 2025 · Updated last year
- ★27 · Nov 25, 2025 · Updated 5 months ago
- ★19 · Apr 16, 2025 · Updated last year
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving · ★337 · Jul 2, 2024 · Updated last year
- Code for the AAAI 2024 paper "CR-SAM: Curvature Regularized Sharpness-Aware Minimization" · ★12 · Nov 29, 2024 · Updated last year
- ★13 · Apr 1, 2026 · Updated 3 weeks ago
- [EMNLP 2024] RoLoRA: Fine-tuning Rotated Outlier-free LLMs for Effective Weight-Activation Quantization · ★39 · Sep 24, 2024 · Updated last year