zxytim / arithmetic-encoding-compression
☆11 · Updated 2 years ago
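For context, arithmetic coding compresses a whole message into a single number in [0, 1) by repeatedly narrowing an interval according to each symbol's probability. Below is a minimal, hypothetical Python sketch of that idea; the fixed `PROBS` table and the `encode`/`decode` names are illustrative and not taken from this repository, and production coders use integer arithmetic with renormalization rather than the floats used here, which lose precision on long messages.

```python
# Toy arithmetic coder with a fixed symbol model.
# Hypothetical sketch for context only -- not code from this repository.

# Static model: symbol -> (cumulative_low, cumulative_high) probability range.
PROBS = {"a": (0.0, 0.6), "b": (0.6, 0.9), "c": (0.9, 1.0)}

def encode(message):
    low, high = 0.0, 1.0
    for sym in message:
        span = high - low
        p_lo, p_hi = PROBS[sym]
        # Narrow the interval to the sub-range assigned to this symbol.
        high = low + span * p_hi
        low = low + span * p_lo
    # Any number inside [low, high) identifies the whole message.
    return (low + high) / 2

def decode(code, length):
    out = []
    low, high = 0.0, 1.0
    for _ in range(length):
        span = high - low
        value = (code - low) / span
        # Find the symbol whose probability range contains the value,
        # then narrow the interval exactly as the encoder did.
        for sym, (p_lo, p_hi) in PROBS.items():
            if p_lo <= value < p_hi:
                out.append(sym)
                high = low + span * p_hi
                low = low + span * p_lo
                break
    return "".join(out)

msg = "abac"
assert decode(encode(msg), len(msg)) == msg
```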
Alternatives and similar repositories for arithmetic-encoding-compression
Users interested in arithmetic-encoding-compression are comparing it to the repositories listed below.
- A comprehensive overview of Data Distillation and Condensation (DDC). DDC is a data-centric task where a representative (i.e., small but … ☆13 · Updated 2 years ago
- Benchmark tests supporting the TiledCUDA library. ☆16 · Updated 5 months ago
- 32 times longer context window than vanilla Transformers and up to 4 times longer than memory-efficient Transformers. ☆48 · Updated last year
- Code associated with the paper **Fine-tuning Language Models over Slow Networks using Activation Compression with Guarantees**. ☆28 · Updated 2 years ago
- [ICLR 2025] Linear Combination of Saved Checkpoints Makes Consistency and Diffusion Models Better ☆14 · Updated 3 months ago
- ☆30 · Updated 11 months ago
- Beyond KV Caching: Shared Attention for Efficient LLMs ☆19 · Updated 9 months ago
- The open-source materials for the paper "Sparsing Law: Towards Large Language Models with Greater Activation Sparsity". ☆21 · Updated 6 months ago
- Official code for Dual Grained Quantization: Efficient Fine-Grained Quantization for LLM ☆14 · Updated last year
- Xmixers: A collection of SOTA efficient token/channel mixers ☆11 · Updated 6 months ago
- [EVA ICLR'23; LARA ICML'22] Efficient attention mechanisms via control variates, random features, and importance sampling ☆85 · Updated 2 years ago
- ☆20 · Updated 2 months ago
- [ICML 2023] "Data Efficient Neural Scaling Law via Model Reusing" by Peihao Wang, Rameswar Panda, Zhangyang Wang ☆14 · Updated last year
- [ICLR 2024] The official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Mod… ☆27 · Updated last year
- Official implementation of "The Sparse Frontier: Sparse Attention Trade-offs in Transformer LLMs" ☆27 · Updated 3 weeks ago
- ☆37 · Updated 2 years ago
- Official implementation of the paper: "A deeper look at depth pruning of LLMs" ☆15 · Updated 9 months ago
- An Attention Superoptimizer ☆21 · Updated 3 months ago
- Accelerate LLM preference tuning via prefix sharing with a single line of code ☆41 · Updated 2 weeks ago
- Benchmarking Attention Mechanism in Vision Transformers. ☆18 · Updated 2 years ago
- Linear Attention Sequence Parallelism (LASP) ☆82 · Updated 11 months ago
- SQUEEZED ATTENTION: Accelerating Long Prompt LLM Inference ☆47 · Updated 5 months ago
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts ☆39 · Updated last year
- Official repository for "QSync: Quantization-Minimized Synchronous Distributed Training Across Hybrid Devices" (IPDPS 2024). ☆20 · Updated last year
- The official implementation for Gated Attention for Large Language Models: Non-linearity, Sparsity, and Attention-Sink-Free ☆17 · Updated this week
- Code for the paper: https://arxiv.org/pdf/2309.06979.pdf ☆19 · Updated 9 months ago
- [ICML 2024] "LoCoCo: Dropping In Convolutions for Long Context Compression", Ruisi Cai, Yuandong Tian, Zhangyang Wang, Beidi Chen ☆16 · Updated 8 months ago
- A Triton version of GQA flash attention, based on the tutorial ☆11 · Updated 9 months ago
- The official code for "Advancing Multimodal Large Language Models with Quantization-Aware Scale Learning for Efficient Adaptation" | [MM2… ☆13 · Updated 5 months ago
- ☆20 · Updated 11 months ago