linkedin / QuantEase
QuantEase, a layer-wise quantization framework, frames post-training quantization as a discrete-structured non-convex optimization problem. It leverages coordinate descent techniques, offering high-quality solutions without the need for matrix inversion or decomposition (a sketch of the coordinate-descent update appears after this entry).
☆19 · Updated last year
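To make the description above concrete, here is a minimal, illustrative sketch of a layer-wise coordinate-descent quantization update of the kind QuantEase describes. It assumes the standard per-column reconstruction objective (w - w_q)^T H (w - w_q) with H = X^T X built from calibration activations; the function name `cd_quantize_column`, the uniform grid, and all parameters are hypothetical and not QuantEase's actual API.

```python
import numpy as np

def cd_quantize_column(w, H, grid, n_iters=20):
    """Quantize one weight column by coordinate descent.

    Minimizes the layer-wise reconstruction loss
        (w - w_q)^T H (w - w_q),  with H = X^T X,
    over w_q restricted to the values in `grid`. Every update is
    closed-form and touches only entries of H, so no matrix
    inversion or decomposition is required.
    """
    # Start from plain nearest-grid rounding.
    w_q = grid[np.abs(grid[None, :] - w[:, None]).argmin(axis=1)]
    e = w - w_q  # running reconstruction error w - w_q
    for _ in range(n_iters):
        for j in range(len(w)):
            # Optimal real-valued choice for coordinate j, others fixed:
            #   t = w_j + (sum_{k != j} H[j, k] * e[k]) / H[j, j]
            resid = H[j] @ e - H[j, j] * e[j]
            t = w[j] + resid / H[j, j]
            w_q[j] = grid[np.abs(grid - t).argmin()]  # project onto grid
            e[j] = w[j] - w_q[j]                      # keep error in sync
    return w_q

# Toy usage: quantize a 64-dim column to an illustrative 4-bit uniform grid.
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 64))     # calibration activations
H = X.T @ X                        # layer-wise Hessian
w = rng.normal(size=64)            # one full-precision weight column
w_q = cd_quantize_column(w, H, np.linspace(-3.0, 3.0, 16))
```

Each coordinate update reads a single row of H, so a full sweep over a d-dimensional column costs O(d^2) and never inverts or factorizes H, which is the property the description highlights.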
Alternatives and similar repositories for QuantEase
Users interested in QuantEase are comparing it to the libraries listed below.
- Building blocks for foundation models. ☆599 · Updated 2 years ago
- ☆562 · Updated last year
- Deep learning for dummies. All the practical details and useful utilities that go into working with real models. ☆829 · Updated 6 months ago
- Minimalistic large language model 3D-parallelism training ☆2,529 · Updated last month
- What would you do with 1000 H100s... ☆1,151 · Updated 2 years ago
- Automatically Discovering Fast Parallelization Strategies for Distributed Deep Neural Network Training ☆1,859 · Updated last week
- Puzzles for learning Triton ☆2,283 · Updated last year
- Pax is a Jax-based machine learning framework for training large scale models. Pax allows for advanced and fully configurable experimenta… ☆547 · Updated 3 weeks ago
- JetStream is a throughput and memory optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in future -- PRs wel… ☆404 · Updated last month
- Large Context Attention ☆766 · Updated 3 months ago
- Tile primitives for speedy kernels ☆3,120 · Updated this week
- Open weights language model from Google DeepMind, based on Griffin. ☆661 · Updated 2 weeks ago
- Serving multiple LoRA-finetuned LLMs as one ☆1,140 · Updated last year
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax ☆693 · Updated last week
- GPU programming related news and material links ☆1,955 · Updated 4 months ago
- An ML Systems Onboarding list ☆981 · Updated last year
- Recipes to scale inference-time compute of open models ☆1,124 · Updated 8 months ago
- LLM KV cache compression made easy ☆866 · Updated last week
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆595 · Updated 5 months ago
- Scalable data pre-processing and curation toolkit for LLMs ☆1,377 · Updated last week
- Minimalistic 4D-parallelism distributed training framework for education purposes ☆2,058 · Updated 5 months ago
- [ACL 2025] Official implementation of the "CoT-ICL Lab" framework ☆11 · Updated 3 months ago
- KernelBench: Can LLMs Write GPU Kernels? - Benchmark + Toolkit with Torch -> CUDA (+ more DSLs) ☆792 · Updated 2 weeks ago
- ☆867 · Updated 2 years ago
- Pipeline Parallelism for PyTorch ☆784 · Updated last year
- A low-latency prediction-serving system ☆1,422 · Updated 4 years ago
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,316 · Updated 11 months ago
- Best practices for distilling large language models. ☆604 · Updated 2 years ago
- ☆957 · Updated 3 months ago
- Slides, notes, and materials for the workshop ☆339 · Updated last year