zxytim / arithmetic-encoding-compression
☆11 · Updated 2 years ago
Alternatives and similar repositories for arithmetic-encoding-compression
Users interested in arithmetic-encoding-compression are comparing it to the libraries listed below.
- Benchmark tests supporting the TiledCUDA library. ☆17 · Updated 8 months ago
- Beyond KV Caching: Shared Attention for Efficient LLMs ☆19 · Updated last year
- A comprehensive overview of Data Distillation and Condensation (DDC). DDC is a data-centric task where a representative (i.e., small but … ☆13 · Updated 2 years ago
- ☆44 · Updated 2 weeks ago
- Code associated with the paper "Fine-tuning Language Models over Slow Networks using Activation Compression with Guarantees". ☆28 · Updated 2 years ago
- SQUEEZED ATTENTION: Accelerating Long Prompt LLM Inference ☆50 · Updated 8 months ago
- ACL 2023 ☆39 · Updated 2 years ago
- ☆20 · Updated 2 years ago
- Implementation of IceFormer: Accelerated Inference with Long-Sequence Transformers on CPUs (ICLR 2024). ☆25 · Updated 3 weeks ago
- [EVA ICLR'23; LARA ICML'22] Efficient attention mechanisms via control variates, random features, and importance sampling ☆86 · Updated 2 years ago
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts ☆40 · Updated last year
- [ICLR 2024] This is the official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Mod… ☆29 · Updated last year
- Accelerate LLM preference tuning via prefix sharing with a single line of code ☆43 · Updated last month
- ☆19 · Updated 7 months ago
- FlexAttention w/ FlashAttention3 Support ☆27 · Updated 10 months ago
- [ICLR 2025] Linear Combination of Saved Checkpoints Makes Consistency and Diffusion Models Better ☆15 · Updated 5 months ago
- TensorRT LLM Benchmark Configuration ☆13 · Updated last year
- Inference framework for MoE layers based on TensorRT with Python binding ☆41 · Updated 4 years ago
- Odysseus: Playground of LLM Sequence Parallelism ☆75 · Updated last year
- 32 times longer context window than vanilla Transformers and up to 4 times longer than memory-efficient Transformers. ☆48 · Updated 2 years ago
- Transformers components but in Triton ☆34 · Updated 3 months ago
- ☆32 · Updated last year
- Linear Attention Sequence Parallelism (LASP) ☆85 · Updated last year
- [ICML 2023] "Data Efficient Neural Scaling Law via Model Reusing" by Peihao Wang, Rameswar Panda, Zhangyang Wang ☆14 · Updated last year
- LongSpec: Long-Context Lossless Speculative Decoding with Efficient Drafting and Verification ☆61 · Updated 3 weeks ago
- Learning Accurate Decision Trees with Bandit Feedback via Quantized Gradient Descent