foundation-model-stack / fastsafetensors
High-performance safetensors model loader
☆53 · Updated last month
Alternatives and similar repositories for fastsafetensors
Users interested in fastsafetensors are comparing it to the libraries listed below.
- The driver for LMCache core to run in vLLM ☆47 · Updated 6 months ago
- ☆31 · Updated 4 months ago
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serv… ☆190 · Updated this week
- CUDA checkpoint and restore utility ☆360 · Updated 6 months ago
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆149 · Updated last week
- NVIDIA Inference Xfer Library (NIXL) ☆557 · Updated this week
- Module, Model, and Tensor Serialization/Deserialization ☆256 · Updated this week
- DeeperGEMM: crazy optimized version ☆71 · Updated 3 months ago
- ☆47 · Updated last year
- Fast and memory-efficient exact attention ☆87 · Updated last week
- ☆195 · Updated 3 months ago
- ☆74 · Updated 4 months ago
- Extensible collectives library in Triton ☆88 · Updated 4 months ago
- Boosting 4-bit inference kernels with 2:4 sparsity ☆80 · Updated 11 months ago
- A Python library that transfers PyTorch tensors between CPU and NVMe ☆120 · Updated 8 months ago
- KV cache store for distributed LLM inference ☆311 · Updated 2 months ago
- Bamboo-7B Large Language Model ☆93 · Updated last year
- kernels, of the mega variety ☆472 · Updated 2 months ago
- ☆120 · Updated last year
- ☆238 · Updated last week
- KV cache compression for high-throughput LLM inference ☆134 · Updated 6 months ago
- [ICLR'25] Fast Inference of MoE Models with CPU-GPU Orchestration ☆226 · Updated 9 months ago
- NVIDIA Resiliency Extension is a Python package for framework developers and users to implement fault-tolerant features. It improves the … ☆206 · Updated this week
- Odysseus: Playground of LLM Sequence Parallelism ☆76 · Updated last year
- Perplexity GPU Kernels ☆435 · Updated 2 weeks ago
- A prefill & decode disaggregated LLM serving framework with shared GPU memory and fine-grained compute isolation ☆104 · Updated 3 months ago
- Stateful LLM Serving ☆81 · Updated 5 months ago
- Fast low-bit matmul kernels in Triton ☆349 · Updated this week
- Aims to implement dual-port and multi-QP solutions in the DeepEP IBRC transport ☆59 · Updated 3 months ago
- [ICLR2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆124 · Updated 8 months ago