rapidsai / raft
RAFT contains fundamental, widely used algorithms and primitives for machine learning and information retrieval. The algorithms are CUDA-accelerated and serve as building blocks for writing high-performance applications more easily.
☆905 · Updated this week
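To give a sense of how RAFT's primitives are consumed from Python, here is a minimal sketch (not an official example) that computes a pairwise distance matrix on the GPU through the pylibraft bindings. It assumes pylibraft and CuPy are installed in a CUDA-enabled environment; keyword arguments may vary between RAPIDS releases.

```python
# Minimal sketch: GPU pairwise distances via RAFT's Python bindings (pylibraft).
# Assumes pylibraft + CuPy are installed and a CUDA-capable GPU is available.
import cupy as cp
from pylibraft.distance import pairwise_distance

n_samples, n_features = 5000, 50
a = cp.random.random_sample((n_samples, n_features), dtype=cp.float32)
b = cp.random.random_sample((n_samples, n_features), dtype=cp.float32)

# pairwise_distance returns a device array exposing __cuda_array_interface__;
# wrapping it with CuPy keeps the result on the GPU for further work.
dists = cp.asarray(pairwise_distance(a, b, metric="euclidean"))
print(dists.shape)  # (5000, 5000)
```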
Alternatives and similar repositories for raft
Users interested in raft are comparing it to the libraries listed below.
- cuVS - a library for vector search and clustering on the GPU (a usage sketch follows this list) ☆451 · Updated this week
- RAPIDS Memory Manager ☆592 · Updated this week
- PyTriton is a Flask/FastAPI-like interface that simplifies Triton's deployment in Python environments. ☆801 · Updated 4 months ago
- Framework for evaluating ANNS algorithms on billion scale datasets. ☆381 · Updated last month
- Graph-structured Indices for Scalable, Fast, Fresh and Filtered Approximate Nearest Neighbor Search ☆1,390 · Updated last week
- CUDA implementation of Hierarchical Navigable Small World Graph algorithm ☆159 · Updated 4 years ago
- CUDA Core Compute Libraries ☆1,727 · Updated this week
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster. ☆1,053 · Updated last year
- FB (Facebook) + GEMM (General Matrix-Matrix Multiplication) - https://code.fb.com/ml-applications/fbgemm/ ☆1,393 · Updated this week
- common in-memory tensor structure ☆1,023 · Updated 3 weeks ago
- A throughput-oriented high-performance serving framework for LLMs ☆834 · Updated 3 weeks ago
- A library to analyze PyTorch traces. ☆391 · Updated last week
- cuGraph - RAPIDS Graph Analytics Library ☆1,989 · Updated this week
- CUDA Kernel Benchmarking Library ☆670 · Updated last week
- Knowhere is an open-source vector search engine, integrating FAISS, HNSW, etc. ☆212 · Updated last year
- GGNN: State of the Art Graph-based GPU Nearest Neighbor Search ☆162 · Updated 4 months ago
- Triton Model Analyzer is a CLI tool to help with better understanding of the compute and memory requirements of the Triton Inference Serv… ☆479 · Updated 3 weeks ago
- A Fusion Code Generator for NVIDIA GPUs (commonly known as "nvFuser") ☆340 · Updated this week
- KvikIO - High Performance File IO ☆213 · Updated this week
- This repository contains tutorials and examples for Triton Inference Server ☆726 · Updated 3 weeks ago
- ONNX Script enables developers to naturally author ONNX functions and models using a subset of Python. ☆360 · Updated this week
- Up to 200x Faster Dot Products & Similarity Metrics — for Python, Rust, C, JS, and Swift, supporting f64, f32, f16 real & complex, i8, an… ☆1,416 · Updated 3 weeks ago
- Triton Python, C++ and Java client libraries, and GRPC-generated client examples for go, java and scala. ☆630 · Updated 2 weeks ago
- Examples demonstrating available options to program multiple GPUs in a single node or a cluster ☆743 · Updated 4 months ago
- An open-source efficient deep learning framework/compiler, written in python. ☆703 · Updated 2 weeks ago
- The Triton TensorRT-LLM Backend ☆858 · Updated this week
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackab… ☆1,573 · Updated last year
- The Triton backend for the ONNX Runtime. ☆153 · Updated 2 weeks ago
- NVIDIA Inference Xfer Library (NIXL) ☆435 · Updated this week
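Since cuVS (the first entry above) now hosts the vector search and clustering algorithms that grew out of RAFT, here is a minimal sketch of building and querying a CAGRA approximate-nearest-neighbor index with its Python API. It assumes the cuvs package and CuPy are installed; the cagra.build / cagra.search names follow recent cuVS releases and may differ in older versions.

```python
# Minimal sketch: CAGRA ANN index build + search with cuVS.
# Assumes the cuvs Python package and CuPy are installed; function names
# (cagra.build / cagra.search) follow recent cuVS releases.
import cupy as cp
from cuvs.neighbors import cagra

dataset = cp.random.random_sample((50_000, 64), dtype=cp.float32)
queries = cp.random.random_sample((1_000, 64), dtype=cp.float32)

index = cagra.build(cagra.IndexParams(metric="sqeuclidean"), dataset)
distances, neighbors = cagra.search(cagra.SearchParams(), index, queries, k=10)

# Both outputs are device arrays; convert to CuPy for further GPU-side work.
distances, neighbors = cp.asarray(distances), cp.asarray(neighbors)
print(neighbors.shape)  # (1000, 10)
```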