An Attention Superoptimizer
☆22 · Jan 20, 2025 · Updated last year
Alternatives and similar repositories for attention_superoptimizer
Users that are interested in attention_superoptimizer are comparing it to the libraries listed below.
- Repository to go along with the paper "Plumber: Diagnosing and Removing Performance Bottlenecks in Machine Learning Data Pipelines" ☆10 · Mar 31, 2022 · Updated 4 years ago
- An experimental parallel training platform ☆57 · Mar 25, 2024 · Updated 2 years ago
- ☆17 · May 10, 2024 · Updated last year
- A research group at UCSD CSE focused on Advanced Data Analytics: data management and systems for ML/AI and data science. ☆11 · Feb 27, 2026 · Updated last month
- [OSDI'24] Serving LLM-based Applications Efficiently with Semantic Variable ☆213 · Sep 21, 2024 · Updated last year
- Artifacts for our SIGCOMM'23 paper Ditto ☆15 · Oct 17, 2023 · Updated 2 years ago
- SGX-based encrypted deduplication prototype ☆13 · May 14, 2021 · Updated 4 years ago
- Matrix multiplication on GPUs for matrices stored on a CPU. Similar to cublasXt, but ported to both NVIDIA and AMD GPUs. ☆32 · Apr 2, 2025 · Updated last year
- Artifacts for SOSP'19 paper Optimizing Deep Learning Computation with Automatic Generation of Graph Substitutions ☆21 · Apr 15, 2022 · Updated 4 years ago
- nnScaler: Compiling DNN models for Parallel Training ☆127 · Apr 8, 2026 · Updated last week
- ☆13 · Jan 23, 2021 · Updated 5 years ago
- Might be a graph storage engine. (WIP) ☆13 · May 14, 2023 · Updated 2 years ago
- A schedule language for large model training ☆152 · Aug 21, 2025 · Updated 7 months ago
- ☆44 · Sep 6, 2021 · Updated 4 years ago
- Mamba support for transformer lens ☆19 · Sep 17, 2024 · Updated last year
- Automatic resource configuration for serverless workflows. ☆21 · Mar 24, 2024 · Updated 2 years ago
- ☆23 · Mar 7, 2025 · Updated last year
- ☆81 · Sep 15, 2025 · Updated 7 months ago
- A source-to-source compiler for optimizing CUDA dynamic parallelism by aggregating launches ☆15 · Jun 21, 2019 · Updated 6 years ago
- ☆20 · Sep 28, 2024 · Updated last year
- Data System for Optimized Deep Learning Model Selection ☆21 · Nov 17, 2022 · Updated 3 years ago
- Source code for QuickSel (SIGMOD 2020) ☆19 · Jul 12, 2025 · Updated 9 months ago
- AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving (OSDI 23) ☆94 · Jul 14, 2023 · Updated 2 years ago
- [COLM 2024] SKVQ: Sliding-window Key and Value Cache Quantization for Large Language Models ☆24 · Oct 5, 2024 · Updated last year
- MLIR tools and dialect for GraphBLAS ☆18 · Mar 30, 2022 · Updated 4 years ago
- Source code for OSDI 2023 paper titled "Cilantro - Performance-Aware Resource Allocation for General Objectives via Online Feedback" ☆40 · Jul 6, 2023 · Updated 2 years ago
- SpotServe: Serving Generative Large Language Models on Preemptible Instances ☆134 · Feb 22, 2024 · Updated 2 years ago
- Code used for testing in a compilers course project ☆10 · Jun 9, 2021 · Updated 4 years ago
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity ☆240 · Sep 24, 2023 · Updated 2 years ago
- A tiny yet powerful LLM inference system tailored for researching purpose. vLLM-equivalent performance with only 2k lines of code (2% of … ☆323 · Jun 10, 2025 · Updated 10 months ago
- ☆15 · Apr 20, 2022 · Updated 3 years ago
- ☆30 · Oct 3, 2022 · Updated 3 years ago
- ☆27 · Aug 31, 2023 · Updated 2 years ago
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios. ☆45 · Feb 27, 2025 · Updated last year
- THIS REPOSITORY HAS MOVED TO github.com/nvidia/cub, WHICH IS AUTOMATICALLY MIRRORED HERE. ☆11 · May 6, 2023 · Updated 2 years ago
- ☆28 · Aug 14, 2024 · Updated last year
- Benchmark for matrix multiplications between dense and block sparse (BSR) matrix in TVM, blocksparse (Gray et al.) and cuSparse. ☆23 · Aug 21, 2020 · Updated 5 years ago
- ☆49 · Apr 11, 2025 · Updated last year
- HierarchicalKV is a part of NVIDIA Merlin and provides hierarchical key-value storage to meet RecSys requirements. The key capability of… ☆199 · Apr 12, 2026 · Updated last week