zxytim / arithmetic-encoding-compression
☆11 · Updated 2 years ago
Alternatives and similar repositories for arithmetic-encoding-compression
Users interested in arithmetic-encoding-compression are comparing it to the repositories listed below.
- Benchmark tests supporting the TiledCUDA library. ☆17 · Updated 10 months ago
- Beyond KV Caching: Shared Attention for Efficient LLMs ☆19 · Updated last year
- A comprehensive overview of Data Distillation and Condensation (DDC). DDC is a data-centric task where a representative (i.e., small but … ☆13 · Updated 2 years ago
- Implementation of IceFormer: Accelerated Inference with Long-Sequence Transformers on CPUs (ICLR 2024). ☆25 · Updated 2 months ago
- Xmixers: A collection of SOTA efficient token/channel mixers ☆29 · Updated last month
- 32 times longer context window than vanilla Transformers and up to 4 times longer than memory-efficient Transformers. ☆48 · Updated 2 years ago
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts ☆40 · Updated last year
- [ICLR 2025] Linear Combination of Saved Checkpoints Makes Consistency and Diffusion Models Better ☆16 · Updated 7 months ago
- [EVA ICLR'23; LARA ICML'22] Efficient attention mechanisms via control variates, random features, and importance sampling ☆86 · Updated 2 years ago
- ACL 2023 ☆39 · Updated 2 years ago
- [ACL 2025] Squeezed Attention: Accelerating Long Prompt LLM Inference ☆54 · Updated 10 months ago
- Linear Attention Sequence Parallelism (LASP) ☆87 · Updated last year
- [ICDCS 2023] Evaluation and Optimization of Gradient Compression for Distributed Deep Learning ☆10 · Updated 2 years ago
- The implementation for the MLSys 2023 paper: "Cuttlefish: Low-rank Model Training without All The Tuning" ☆44 · Updated 2 years ago
- Transformers components but in Triton ☆34 · Updated 5 months ago
- [ICLR 2024] The official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Mod… ☆30 · Updated last year
- Accelerate LLM preference tuning via prefix sharing with a single line of code ☆43 · Updated 3 months ago
- TensorRT LLM Benchmark Configuration ☆13 · Updated last year
- Repository for the COLM 2025 paper "SpecDec++: Boosting Speculative Decoding via Adaptive Candidate Lengths" ☆12 · Updated 3 months ago
- Odysseus: Playground of LLM Sequence Parallelism ☆77 · Updated last year
- FlexAttention w/ FlashAttention3 Support ☆27 · Updated last year
- Low-Rank Llama Custom Training ☆23 · Updated last year
- ☆32 · Updated last year
- ☆14 · Updated last year
- Code associated with the paper **Fine-tuning Language Models over Slow Networks using Activation Compression with Guarantees**. ☆27 · Updated 2 years ago
- Open-sourcing code associated with the AAAI-25 paper "On the Expressiveness and Length Generalization of Selective State-Space Models on … ☆15 · Updated 3 weeks ago
- Triton version of GQA flash attention, based on the tutorial ☆12 · Updated last year
- ☆61 · Updated last year
- Measuring the Signal-to-Noise Ratio in Language Model Evaluation ☆23 · Updated last month
- A simple implementation of [Mamba: Linear-Time Sequence Modeling with Selective State Spaces](https://arxiv.org/abs/2312.00752) ☆22 · Updated last year