PrunaAI / pruna
Pruna is a model optimization framework built for developers, enabling you to deliver faster, more efficient models with minimal overhead.
⭐1,080 · Updated this week
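As a rough illustration of what "minimal overhead" means in practice, the sketch below follows the smash-a-model pattern from Pruna's documentation. It is a minimal sketch, not a verified example: the model choice and the config key/value ("quantizer" = "hqq") are assumptions for illustration, so check the Pruna docs for the algorithms actually supported.

```python
# Minimal sketch of optimizing a model with Pruna (assumed API: SmashConfig + smash).
# The "quantizer" = "hqq" setting is an illustrative assumption, not a verified option.
from transformers import AutoModelForCausalLM
from pruna import SmashConfig, smash

base_model = AutoModelForCausalLM.from_pretrained("gpt2")

smash_config = SmashConfig()
smash_config["quantizer"] = "hqq"  # choose one optimization algorithm per slot
smashed_model = smash(model=base_model, smash_config=smash_config)

# The smashed model is meant to keep the original inference interface,
# so existing generation code continues to work on the optimized model.
```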
Alternatives and similar repositories for pruna
Users interested in pruna are comparing it to the libraries listed below.
- A lightweight, local-first, and free experiment tracking library from Hugging Face 🤗 ⭐1,244 · Updated last week
- A CLI to estimate inference memory requirements for Hugging Face models, written in Python. ⭐646 · Updated last week
- Fast State-of-the-Art Static Embeddings ⭐1,992 · Updated last month
- Speed up model training by fixing data loading. ⭐575 · Updated this week
- NeMo Data Designer: A general library for generating high-quality synthetic data from scratch or based on seed data. ⭐674 · Updated this week
- An interface library for RL post-training with environments. ⭐1,112 · Updated this week
- A curated list of materials on AI efficiency ⭐205 · Updated last month
- Where GPUs get cooked 👩‍🍳🔥 ⭐362 · Updated 2 weeks ago
- PyTorch compiler that accelerates training and inference. Get built-in optimizations for performance, memory, parallelism, and easily wri… ⭐1,439 · Updated this week
- ⭐214 · Updated last week
- 🤗 Benchmark Large Language Models Reliably On Your Data ⭐426 · Updated last month
- Hypernetworks that adapt LLMs to specific benchmark tasks using only a textual task description as input ⭐938 · Updated 7 months ago
- Official Implementation of "ADOPT: Modified Adam Can Converge with Any β2 with the Optimal Rate" ⭐434 · Updated last year
- Next Generation Experimental Tracking for Machine Learning Operations ⭐364 · Updated 8 months ago
- A Lossless Compression Library for AI pipelines ⭐299 · Updated 7 months ago
- Scalable and Performant Data Loading ⭐364 · Updated this week
- Fast Multimodal Semantic Deduplication & Filtering ⭐882 · Updated 2 weeks ago
- Best practices & guides on how to write distributed PyTorch training code ⭐575 · Updated 3 months ago
- prime is a framework for efficient, globally distributed training of AI models over the internet. ⭐850 · Updated 2 months ago
- PyTorch native quantization and sparsity for training and inference ⭐2,657 · Updated last week
- PyTorch Single Controller ⭐957 · Updated this week
- Official inference library for pre-processing of Mistral models ⭐849 · Updated last week
- Inference, fine-tuning, and many more recipes with the Gemma family of models ⭐279 · Updated 6 months ago
- An implementation of the PSGD Kron second-order optimizer for PyTorch ⭐98 · Updated 6 months ago
- Minimalistic 4D-parallelism distributed training framework for educational purposes ⭐2,058 · Updated 5 months ago
- For optimization algorithm research and development. ⭐558 · Updated 3 weeks ago
- Ultrafast serverless GPU inference, sandboxes, and background jobs ⭐1,556 · Updated 3 weeks ago
- PyTorch media decoding and encoding ⭐940 · Updated this week
- dLLM: Simple Diffusion Language Modeling ⭐1,693 · Updated last month
- Multi-backend recommender systems with Keras 3 ⭐160 · Updated 2 weeks ago