Simple, safe way to store and distribute tensors
⭐3,701 • Apr 2, 2026 • Updated 2 weeks ago
Alternatives and similar repositories for safetensors
Users interested in safetensors are comparing it to the libraries listed below.
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ⭐9,608 • Updated this week
- Accessible large language models via k-bit quantization for PyTorch. ⭐8,121 • Updated this week
- Minimalist ML framework for Rust ⭐20,024 • Updated this week
- Large Language Model Text Generation Inference ⭐10,841 • Mar 21, 2026 • Updated 3 weeks ago
- 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy to use hardware optimization… ⭐3,358 • Updated this week
- Hackable and optimized Transformers building blocks, supporting a composable construction. ⭐10,417 • Mar 30, 2026 • Updated 2 weeks ago
- Development repository for the Triton language and compiler ⭐18,974 • Updated this week
- 💥 Fast State-of-the-Art Tokenizers optimized for Research and Production ⭐10,620 • Apr 11, 2026 • Updated last week
- Fast and memory-efficient exact attention ⭐23,344 • Updated this week
- Tensor library for machine learning ⭐14,459 • Updated this week
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ⭐20,929 • Apr 10, 2026 • Updated last week
- Flax is a neural network library for JAX that is designed for flexibility. ⭐7,161 • Updated this week
- 🤗 Diffusers: State-of-the-art diffusion models for image, video, and audio generation in PyTorch. ⭐33,336 • Updated this week
- Rust bindings for the C++ api of PyTorch. ⭐5,346 • Mar 26, 2026 • Updated 3 weeks ago
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ⭐42,141 • Updated this week
- PyTorch extensions for high performance and large scale training. ⭐3,405 • Apr 26, 2025 • Updated 11 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ⭐76,536 • Updated this week
- Train transformer language models with reinforcement learning. ⭐18,054 • Updated this week
- PyTorch native quantization and sparsity for training and inference ⭐2,786 • Updated this week
- Composable transformations of Python+NumPy programs: differentiate, vectorize, JIT to GPU/TPU, and more ⭐35,370 • Updated this week
- Flexible and powerful tensor operations for readable and reliable code (for pytorch, jax, TF and others) ⭐9,456 • Apr 9, 2026 • Updated last week
- Minimalistic large language model 3D-parallelism training ⭐2,654 • Apr 7, 2026 • Updated last week
- tiktoken is a fast BPE tokeniser for use with OpenAI's models. ⭐17,908 • Mar 27, 2026 • Updated 3 weeks ago
- 🤗 Evaluate: A library for easily evaluating machine learning models and datasets. ⭐2,437 • Apr 8, 2026 • Updated last week
- SGLang is a high-performance serving framework for large language models and multimodal models. ⭐26,025 • Updated this week
- TensorRT LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizat… ⭐13,354 • Updated this week
- Pretrain, finetune ANY AI model of ANY size on 1 or 10,000+ GPUs with zero code changes. ⭐31,042 • Apr 7, 2026 • Updated last week
- A blazing fast inference solution for text embeddings models ⭐4,684 • Updated this week
- Transformer related optimization, including BERT, GPT ⭐6,412 • Mar 27, 2024 • Updated 2 years ago
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster. ⭐1,077 • Apr 17, 2024 • Updated 2 years ago
- AITemplate is a Python framework which renders neural network into high performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (N… ⭐4,720 • Apr 9, 2026 • Updated last week
- Build and share delightful machine learning apps, all in Python. 🌟 Star to support our work! ⭐42,340 • Updated this week
- Ongoing research training transformer models at scale ⭐16,073 • Updated this week
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… ⭐3,280 • Updated this week
- Burn is a next generation tensor library and Deep Learning Framework that doesn't compromise on flexibility, efficiency and portability. ⭐14,843 • Apr 10, 2026 • Updated last week
- LLM inference in C/C++ ⭐103,237 • Updated this week
- The Triton Inference Server provides an optimized cloud and edge inferencing solution. ⭐10,573 • Updated this week
- PyTorch native post-training library ⭐5,728 • Apr 10, 2026 • Updated last week
- A pytorch quantization backend for optimum ⭐1,036 • Apr 2, 2026 • Updated 2 weeks ago