hegongshan / Storage-for-AI-PaperLinks
Accelerating AI Training and Inference from a Storage Perspective (Must-read Papers on Storage for AI)
☆56 · Updated last week
Alternatives and similar repositories for Storage-for-AI-PaperLinks
Users interested in Storage-for-AI-PaperLinks are comparing it to the repositories listed below.
- GeminiFS: A Companion File System for GPUs ☆69 · Updated 10 months ago
- High Performance KV Cache Store for LLM ☆43 · Updated 3 weeks ago
- Rcmp: Reconstructing RDMA-based Memory Disaggregation via CXL ☆60 · Updated 2 years ago
- Hydra adds resilience and high availability to remote memory solutions. ☆33 · Updated 3 years ago
- This is the implementation repository of our OSDI'23 paper: SMART: A High-Performance Adaptive Radix Tree for Disaggregated Memory. ☆64 · Updated last year
- This is the implementation repository of our FAST'23 paper: FUSEE: A Fully Memory-Disaggregated Key-Value Store. ☆60 · Updated 2 years ago
- [FAST 2022] FORD: Fast One-sided RDMA-based Distributed Transactions for Disaggregated Persistent Memory ☆62 · Updated last year
- AIFM: High-Performance, Application-Integrated Far Memory ☆124 · Updated 2 years ago
- [OSDI 2024] Motor: Enabling Multi-Versioning for Distributed Transactions on Disaggregated Memory ☆50 · Updated last year
- Fast RDMA-based Ordered Key-Value Store using Remote Learned Cache ☆117 · Updated 4 years ago
- System paper reading notes ☆246 · Updated 3 months ago
- ROLEX: A Scalable RDMA-oriented Learned Key-Value Store for Disaggregated Memory Systems ☆80 · Updated 2 years ago
- Repository for the FAST'23 paper GL-Cache: Group-level Learning for Efficient and High-Performance Caching ☆51 · Updated 2 years ago
- RLib is a header-only library for easier usage of RDMA. ☆45 · Updated 4 years ago
- A User-Transparent Block Cache Enabling High-Performance Out-of-Core Processing with In-Memory Programs ☆75 · Updated 2 weeks ago
- dLSM: An LSM-Based Index for RDMA-Enabled Memory Disaggregation ☆36 · Updated 2 years ago
- Sherman: A Write-Optimized Distributed B+Tree Index on Disaggregated Memory ☆110 · Updated last year
- An OS kernel module for fast **remote** fork using advanced datacenter networking (RDMA). ☆69 · Updated 10 months ago
- PetPS: Supporting Huge Embedding Models with Tiered Memory ☆33 · Updated last year
- This is a fast RDMA abstraction layer that works in both the kernel and user space. ☆58 · Updated last year
- ☆28 · Updated last year
- High performance RDMA-based distributed feature collection component for training GNN models on EXTREMELY large graphs ☆56 · Updated 3 years ago
- ☆213 · Updated 2 years ago
- Code for "Baleen: ML Admission & Prefetching for Flash Caches" (FAST 2024). ☆27 · Updated last year
- SHADE: Enable Fundamental Cacheability for Distributed Deep Learning Training ☆35 · Updated 2 years ago
- ☆56 · Updated 4 years ago
- DINOMO: An Elastic, Scalable, High-Performance Key-Value Store for Disaggregated Persistent Memory (PVLDB 2022, VLDB 2023) ☆37 · Updated 2 years ago
- This is the source code for our (Tobias Ziegler, Jacob Nelson-Slivon, Carsten Binnig and Viktor Leis) published paper at SIGMOD'23: Desig… ☆28 · Updated last year
- ☆34 · Updated 5 months ago
- Scaling Up Memory Disaggregated Applications with SMART ☆32 · Updated last year