rapidsai / distributed-join
☆20 · Updated 3 years ago
Alternatives and similar repositories for distributed-join:
Users interested in distributed-join are comparing it to the libraries listed below.
- Python bindings for UCX · ☆125 · Updated last week
- Lightning In-Memory Object Store · ☆44 · Updated 3 years ago
- RAPIDS GPU-BDB · ☆108 · Updated 11 months ago
- GPUDirect Async support for IB Verbs · ☆100 · Updated 2 years ago
- PyTorch UCC plugin · ☆18 · Updated 3 years ago
- Linear algebra subroutines for large SSD-resident dense and sparse matrices · ☆27 · Updated 4 years ago
- gossip: Efficient Communication Primitives for Multi-GPU Systems · ☆58 · Updated 2 years ago
- A GPU-Accelerated In-Memory Key-Value Store (AWS-focused fork) · ☆28 · Updated 7 years ago
- A Micro-benchmarking Tool for HPC Networks · ☆25 · Updated last month
- GPU library for writing SQL queries · ☆70 · Updated 8 months ago
- High Performance Network Library for RDMA · ☆27 · Updated 2 years ago
- OFI Programmer's Guide · ☆52 · Updated 2 years ago
- A Distributed Multi-GPU System for Fast Graph Processing · ☆65 · Updated 6 years ago
- TLB Benchmarks · ☆33 · Updated 7 years ago
- Benchmarking In-Memory Index Structures · ☆26 · Updated 6 years ago
- ☆14 · Updated 5 years ago
- A collective communication library that plugs into Hadoop · ☆23 · Updated 2 years ago
- A multi-level dataflow tracer for capturing I/O calls from workflows · ☆15 · Updated last week
- Code for the paper "Engineering a High-Performance GPU B-Tree", accepted to PPoPP 2019 · ☆55 · Updated 2 years ago
- Fast I/O plugins for Spark · ☆41 · Updated 4 years ago
- ☆23 · Updated 3 years ago
- Asynchronous Multi-GPU Programming Framework · ☆45 · Updated 3 years ago
- A hierarchical collective communications library with portable optimizations · ☆28 · Updated 2 months ago
- A NUMA-aware Graph-structured Analytics Framework · ☆42 · Updated 6 years ago
- High-performance, GPU-aware communication library · ☆84 · Updated last month
- A User-Transparent Block Cache Enabling High-Performance Out-of-Core Processing with In-Memory Programs · ☆74 · Updated last year
- A Library for fast Hash Tables on GPUs · ☆114 · Updated 2 years ago
- SnailTrail implementation · ☆39 · Updated 5 years ago
- Prototype of OpenSHMEM for NVIDIA GPUs, developed as part of DoE Design Forward · ☆21 · Updated 6 years ago
- Artifact for the PPoPP 2018 paper "Making Pull-Based Graph Processing Performant" · ☆23 · Updated 4 years ago