Vortex: A Flexible and Efficient Sparse Attention Framework
☆49, updated Jan 21, 2026
Alternatives and similar repositories for vortex_torch
Users interested in vortex_torch are comparing it to the libraries listed below.
- Sirius, an efficient correction mechanism that significantly boosts Contextual Sparsity models on reasoning tasks while maintaining its… (☆21, updated Sep 10, 2024)
- ☆32 (updated Jul 2, 2025)
- [ICML 2025] LaCache: Ladder-Shaped KV Caching for Efficient Long-Context Modeling of Large Language Models (☆17, updated Nov 4, 2025)
- ☆63 (updated Jun 12, 2025)
- DeepSeek-V3.2-Exp DSA Warmup Lightning Indexer training operator based on tilelang (☆44, updated Nov 19, 2025)
- [ICML 2025] SparseLoRA: Accelerating LLM Fine-Tuning with Contextual Sparsity (☆71, updated Mar 10, 2026)
- flex-block-attn: an efficient block-sparse attention computation library (☆127, updated Dec 26, 2025)
- Notes for the book Fluent Python, 1st Edition (O'Reilly, 2015) (☆11, updated Jun 30, 2022)
- ☆37 (updated Jul 19, 2025)
- ☆229 (updated Nov 19, 2025)
- ☆65 (updated Apr 26, 2025)
- Memory-optimized Mixture of Experts (☆75, updated Jul 25, 2025)
- Kinetics: Rethinking Test-Time Scaling Laws (☆86, updated Jul 11, 2025)
- My tests and experiments with some popular DL frameworks (☆17, updated Sep 11, 2025)
- Canvas: End-to-End Kernel Architecture Search in Neural Networks (☆27, updated Nov 18, 2024)
- A benchmarking tool for comparing different LLM API providers' DeepSeek model deployments (☆30, updated Mar 28, 2025)
- Tutorial exercises and code for the GPU Communications Tutorial at HOT Interconnects 2025 (☆31, updated Oct 22, 2025)
- Low-overhead tracing library and trace visualizer for pipelined CUDA kernels (☆133, updated Nov 26, 2025)
- Asynchronous pipeline-parallel optimization (☆19, updated Feb 2, 2026)
- Open deep learning compiler stack for CPUs, GPUs, and specialized accelerators (☆19, updated Mar 12, 2026)
- Efficient Long-Context Language Model Training by Core Attention Disaggregation (☆96, updated Mar 5, 2026)
- Helper functions for processing and integrating visual-language information with the Qwen-VL series models (☆17, updated Aug 30, 2024)
- Training project about Deep Learning (☆12, updated Jun 22, 2017)
- [COLM 2024] TriForce: Lossless Acceleration of Long Sequence Generation with Hierarchical Speculative Decoding (☆277, updated Aug 31, 2024)
- ☆13 (updated Dec 9, 2024)
- ☆68 (updated this week)
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding (☆144, updated Dec 4, 2024)
- Kernels, of the mega variety (☆690, updated this week)
- [AAAI26] LongLLaDA: Unlocking Long Context Capabilities in Diffusion LLMs (☆53, updated Dec 7, 2025)
- ☆41 (updated Oct 15, 2025)
- A sparse attention kernel supporting mixed sparse patterns (☆480, updated Jan 18, 2026)
- ☆119 (updated May 19, 2025)
- [ICML 2025 Spotlight] ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference (☆283, updated May 1, 2025)
- An Open-Source RAG Workload Trace to Optimize RAG Serving Systems (☆35, updated Nov 18, 2025)
- ☆15 (updated Jul 13, 2025)
- [ICML 2024] KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache (☆359, updated Nov 20, 2025)
- Hints for installing and working through the xv6 lab (☆12, updated Jan 28, 2021)
- The official repo for "LLoCo: Learning Long Contexts Offline" (☆118, updated Jun 15, 2024)
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling (☆21, updated this week)