zipnn / zipnn
A Lossless Compression Library for AI pipelines
☆288 · Updated 5 months ago
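A minimal usage sketch is shown below. It assumes zipnn's Python package exposes a `ZipNN` class with byte-level `compress()`/`decompress()` methods; the constructor defaults, method names, and file name used here are assumptions for illustration, not a verified description of the library's API.

```python
# Hedged sketch: assumes `zipnn` provides a ZipNN class whose compress()
# and decompress() round-trip raw bytes losslessly. Adjust names/options
# to the library's actual API.
from zipnn import ZipNN

zpn = ZipNN()  # default settings; constructor options are assumed

# "model.safetensors" is a placeholder input file for this example.
with open("model.safetensors", "rb") as f:
    raw = f.read()

compressed = zpn.compress(raw)          # compress the raw tensor bytes
restored = zpn.decompress(compressed)   # recover the original bytes

assert bytes(restored) == raw           # lossless round trip
print(f"compression ratio: {len(compressed) / len(raw):.3f}")
```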
Alternatives and similar repositories for zipnn
Users interested in zipnn are comparing it to the libraries listed below.
- ☆267 · Updated last week
- Google TPU optimizations for transformers models ☆123 · Updated 10 months ago
- Scalable and Performant Data Loading ☆345 · Updated last week
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ☆202 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆267 · Updated last year
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆155 · Updated last year
- Simple high-throughput inference library ☆150 · Updated 6 months ago
- A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM ☆132 · Updated last week
- 👷 Build compute kernels ☆190 · Updated this week
- Accelerating your LLM training to full speed! Made with ❤️ by ServiceNow Research ☆262 · Updated this week
- Formatron empowers everyone to control the format of language models' output with minimal overhead. ☆231 · Updated 5 months ago
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆210 · Updated 2 weeks ago
- DeMo: Decoupled Momentum Optimization ☆197 · Updated last year
- The Batched API provides a flexible and efficient way to process multiple requests in a batch, with a primary focus on dynamic batching o… ☆153 · Updated 4 months ago
- ☆86 · Updated 5 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆130 · Updated last year
- Load compute kernels from the Hub ☆337 · Updated last week
- ☆113 · Updated 2 months ago
- ☆456 · Updated last week
- ☆47 · Updated last year
- ☆26 · Updated this week
- ☆136 · Updated last year
- Module, Model, and Tensor Serialization/Deserialization ☆276 · Updated 3 months ago
- Training-free Post-training Efficient Sub-quadratic Complexity Attention. Implemented with OpenAI Triton. ☆148 · Updated last month
- RWKV-7: Surpassing GPT ☆101 · Updated last year
- Lightweight toolkit package to train and fine-tune 1.58bit Language models ☆99 · Updated 6 months ago
- ☆52 · Updated last year
- Hugging Face Jobs ☆19 · Updated 4 months ago
- ArcticInference: vLLM plugin for high-throughput, low-latency inference ☆327 · Updated this week
- Fault tolerance for PyTorch (HSDP, LocalSGD, DiLoCo, Streaming DiLoCo) ☆454 · Updated 3 weeks ago