fabiocannizzo / FastBinarySearch
Fast and vectorizable algorithms for searching in a vector of sorted floating point numbers
☆152 · Updated 10 months ago
Alternatives and similar repositories for FastBinarySearch
Users interested in FastBinarySearch are comparing it to the libraries listed below.
- LLM training in simple, raw C/CUDA ☆107 · Updated last year
- High-Performance SGEMM on CUDA devices ☆107 · Updated 9 months ago
- Implementation of the paper "Lossless Compression of Vector IDs for Approximate Nearest Neighbor Search" by Severo et al. ☆82 · Updated 9 months ago
- FlexAttention w/ FlashAttention3 Support ☆27 · Updated last year
- Make Triton easier ☆48 · Updated last year
- A stand-alone implementation of several NumPy dtype extensions used in machine learning. ☆305 · Updated this week
- Standalone command-line CLI tool for compiling Triton kernels ☆20 · Updated last year
- Inference of Mamba models in pure C ☆192 · Updated last year
- Simple high-throughput inference library ☆149 · Updated 5 months ago
- llama.cpp to PyTorch Converter ☆34 · Updated last year
- A minimalistic C++ Jinja templating engine for LLM chat templates ☆193 · Updated last month
- A tracing JIT compiler for PyTorch ☆13 · Updated 3 years ago
- Clover: Quantized 4-bit Linear Algebra Library ☆113 · Updated 7 years ago
- ☆71 · Updated 7 months ago
- 🏙 Interactive performance profiling and debugging tool for PyTorch neural networks. ☆64 · Updated 9 months ago
- GPU benchmark ☆72 · Updated 9 months ago
- ☆21 · Updated 7 months ago
- Implementation of "Efficient Multi-vector Dense Retrieval with Bit Vectors", ECIR 2024 ☆66 · Updated last week
- Extensible collectives library in Triton ☆90 · Updated 7 months ago
- No-GIL Python environment featuring NVIDIA Deep Learning libraries. ☆67 · Updated 6 months ago
- Parallel framework for training and fine-tuning deep neural networks ☆65 · Updated last week
- Nod.ai 🦈 version of 👻. You probably want to start at https://github.com/nod-ai/shark for the product and the upstream IREE repository … ☆106 · Updated 9 months ago
- Prepare for DeepSeek R1 inference: benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code. ☆74 · Updated 8 months ago
- Benchmarks to capture important workloads. ☆31 · Updated 9 months ago
- ☆28 · Updated 9 months ago
- CUDA and Triton implementations of Flash Attention with SoftmaxN. ☆73 · Updated last year
- Hydragen: High-Throughput LLM Inference with Shared Prefixes ☆43 · Updated last year
- A collection of reproducible inference engine benchmarks ☆37 · Updated 6 months ago
- Experiment of using Tangent to autodiff Triton ☆80 · Updated last year
- torch::deploy (multipy for non-torch uses) is a system that lets you get around the GIL problem by running multiple Python interpreters i… ☆181 · Updated 2 months ago