readwrite112 / AGAThA
[PPoPP'24] AGAThA: Fast and Efficient GPU Acceleration of Guided Sequence Alignment for Long Read Mapping
☆22 · Updated last year
Alternatives and similar repositories for AGAThA
Users interested in AGAThA are comparing it to the libraries listed below:
- ☆28 · Updated last year
- A benchmark suite to study the performance characteristics of genomics applications ☆32 · Updated last year
- [PACT'24] GraNNDis: A fast and unified distributed graph neural network (GNN) training framework for both full-batch (full-graph) and min… ☆10 · Updated last year
- Artifact for USENIX ATC'23: TC-GNN: Bridging Sparse GNN Computation and Dense Tensor Cores on GPUs. ☆52 · Updated 2 years ago
- Artifact for OSDI'23: MGG: Accelerating Graph Neural Networks with Fine-grained intra-kernel Communication-Computation Pipelining on Mult… ☆41 · Updated last year
- [HPCA'24] Smart-Infinity: Fast Large Language Model Training using Near-Storage Processing on a Real System ☆50 · Updated 5 months ago
- A High-Throughput Multi-GPU System for Graph-Based Approximate Nearest Neighbor Search ☆20 · Updated 5 months ago
- ☆19 · Updated 7 months ago
- ☆11 · Updated 8 months ago
- GenStore is the first in-storage processing system designed for genome sequence analysis that greatly reduces both data movement and comp… ☆14 · Updated 3 years ago
- ☆34 · Updated 3 months ago
- PyGim is the first runtime framework to efficiently execute Graph Neural Networks (GNNs) on real Processing-in-Memory systems. It provide… ☆32 · Updated 8 months ago
- ☆81 · Updated 7 months ago
- ☆15 · Updated 4 months ago
- UPMEM LLM Framework allows profiling PyTorch layers and functions and simulating those layers/functions with a given hardware profile. ☆37 · Updated 5 months ago
- ☆58 · Updated last year
- Artifact for PPoPP'22 QGTC: Accelerating Quantized GNN via GPU Tensor Core. ☆30 · Updated 3 years ago
- Ginex: SSD-enabled Billion-scale Graph Neural Network Training on a Single Machine via Provably Optimal In-memory Caching ☆41 · Updated last year
- A pattern-based algorithmic autotuner for graph processing on GPUs. ☆31 · Updated 6 months ago
- Artifact for PPoPP'20 "Understanding and Bridging the Gaps in Current GNN Performance Optimizations" ☆40 · Updated 4 years ago
- WaferLLM: Large Language Model Inference at Wafer Scale ☆80 · Updated 2 months ago
- [SIGMOD 2025] PQCache: Product Quantization-based KVCache for Long Context LLM Inference ☆83 · Updated last month
- Artifact of the ASPLOS'23 paper GRACE: A Scalable Graph-Based Approach to Accelerating Recommendation Model Inference ☆19 · Updated 2 years ago
- ArkVale: Efficient Generative LLM Inference with Recallable Key-Value Eviction (NIPS'24) ☆50 · Updated last year
- Artifact for OSDI'21 GNNAdvisor: An Adaptive and Efficient Runtime System for GNN Acceleration on GPUs. ☆70 · Updated 2 years ago
- MAGIS: Memory Optimization via Coordinated Graph Transformation and Scheduling for DNN (ASPLOS'24) ☆56 · Updated last year
- ☆25 · Updated 3 years ago
- ☆41 · Updated 6 months ago
- FGNN's artifact evaluation (EuroSys 2022) ☆18 · Updated 3 years ago
- PIM-DL: Expanding the Applicability of Commodity DRAM-PIMs for Deep Learning via Algorithm-System Co-Optimization ☆34 · Updated last year