linkedin / QuantEase
QuantEase is a layer-wise quantization framework that frames quantization as a discrete-structured, non-convex optimization problem. It leverages coordinate descent techniques, offering high-quality solutions without the need for matrix inversion or decomposition.
☆19 · Updated last year
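The description above is compact, so here is a rough sketch of the idea it names: layer-wise quantization posed as coordinate descent on ||XW - XW_q||_F^2, updating one input coordinate at a time using only the Gram matrix H = X^T X. This is not QuantEase's actual code or API; the function name, the uniform quantizer, and all parameters below are illustrative assumptions.

```python
import numpy as np

def quantize_layer_cd(W, X, bits=4, n_sweeps=3):
    """Hedged sketch of coordinate-descent layer quantization.

    Minimizes ||X @ W - X @ W_q||_F^2 over W_q constrained to a uniform
    grid, updating one input coordinate (row of W_q) at a time. Only the
    Gram matrix H = X.T @ X is needed -- no inversion or decomposition.
    Illustrative only; not the QuantEase implementation.
    """
    H = X.T @ X                                  # (d, d) Gram matrix of layer inputs
    # Illustrative per-output-column uniform quantization grid.
    scale = np.abs(W).max(axis=0) / (2 ** (bits - 1) - 1)
    scale[scale == 0] = 1.0
    lo, hi = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    quant = lambda V: np.clip(np.round(V / scale), lo, hi) * scale

    W_q = quant(W)
    G = H @ (W_q - W)                            # residual term H (W_q - W)
    for _ in range(n_sweeps):
        for i in range(W.shape[0]):              # sweep over input coordinates
            if H[i, i] <= 0:
                continue
            row_old = W_q[i].copy()
            # Exact unconstrained minimizer over row i, then round to the grid.
            W_q[i] = quant(W_q[i] - G[i] / H[i, i])
            G += np.outer(H[:, i], W_q[i] - row_old)   # rank-1 residual update
    return W_q
```

Each sweep costs O(d^2 * m) via the rank-1 residual updates, on the order of forming H itself, which is how a coordinate-descent scheme can sidestep the inverse-Hessian bookkeeping that decomposition-based solvers maintain.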
Alternatives and similar repositories for QuantEase
Users interested in QuantEase are comparing it to the libraries listed below.
- Minimalistic large language model 3D-parallelism training ☆2,529 · Updated last month
- Building blocks for foundation models. ☆599 · Updated 2 years ago
- ☆562 · Updated last year
- Deep learning for dummies. All the practical details and useful utilities that go into working with real models. ☆829 · Updated 6 months ago
- Serving multiple LoRA finetuned LLMs as one ☆1,140 · Updated last year
- PyTorch native quantization and sparsity for training and inference ☆2,657 · Updated last week
- Open weights language model from Google DeepMind, based on Griffin. ☆663 · Updated this week
- Minimalistic 4D-parallelism distributed training framework for education purposes ☆2,058 · Updated 5 months ago
- Pax is a Jax-based machine learning framework for training large scale models. Pax allows for advanced and fully configurable experimenta… ☆547 · Updated 3 weeks ago
- Official implementation of Half-Quadratic Quantization (HQQ) ☆912 · Updated last month
- What would you do with 1000 H100s... ☆1,151 · Updated 2 years ago
- [ACL 2025] Official implementation of the "CoT-ICL Lab" framework ☆11 · Updated 3 months ago
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆595 · Updated 5 months ago
- JetStream is a throughput and memory optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in future -- PRs wel… ☆404 · Updated last month
- Scalable data pre-processing and curation toolkit for LLMs ☆1,391 · Updated this week
- Fault tolerance for PyTorch (HSDP, LocalSGD, DiLoCo, Streaming DiLoCo) ☆475 · Updated this week
- KernelBench: Can LLMs Write GPU Kernels? - Benchmark + Toolkit with Torch -> CUDA (+ more DSLs) ☆792 · Updated 2 weeks ago
- LLM KV cache compression made easy ☆876 · Updated last week
- PyTorch compiler that accelerates training and inference. Get built-in optimizations for performance, memory, parallelism, and easily wri… ☆1,439 · Updated this week
- Automatically Discovering Fast Parallelization Strategies for Distributed Deep Neural Network Training ☆1,859 · Updated last week
- Puzzles for learning Triton ☆2,283 · Updated last year
- The tool facilitates debugging convergence issues and testing new algorithms and recipes for training LLMs using Nvidia libraries such as… ☆18 · Updated 4 months ago
- Flash Attention in ~100 lines of CUDA (forward pass only) ☆1,067 · Updated last year
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆1,897 · Updated 2 years ago
- Pipeline Parallelism for PyTorch ☆784 · Updated last year
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,316 · Updated 11 months ago
- FB (Facebook) + GEMM (General Matrix-Matrix Multiplication) - https://code.fb.com/ml-applications/fbgemm/ ☆1,525 · Updated this week
- Large Context Attention ☆766 · Updated 3 months ago
- ☆957 · Updated 3 months ago
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. ☆1,005 · Updated last year