PrunaAI / pruna
Pruna is a model optimization framework built for developers, enabling you to deliver faster, more efficient models with minimal overhead.
☆902 · Updated last week
Alternatives and similar repositories for pruna
Users interested in pruna are comparing it to the libraries listed below.
- A lightweight, local-first, and free experiment tracking library from Hugging Face 🤗 ☆945 · Updated this week
- Speed up model training by fixing data loading. ☆551 · Updated this week
- Fast State-of-the-Art Static Embeddings ☆1,863 · Updated last week
- Next Generation Experimental Tracking for Machine Learning Operations ☆346 · Updated 4 months ago
- A curated list of materials on AI efficiency ☆173 · Updated 2 weeks ago
- 🤗 Benchmark Large Language Models Reliably On Your Data ☆404 · Updated 2 weeks ago
- A Lossless Compression Library for AI pipelines ☆283 · Updated 3 months ago
- Tool for generating high-quality synthetic datasets ☆1,282 · Updated 2 weeks ago
- PyTorch compiler that accelerates training and inference. Get built-in optimizations for performance, memory, parallelism, and easily wri… ☆1,414 · Updated this week
- Actually Robust Training - a tool inspired by Andrej Karpathy's "Recipe for Training Neural Networks". It allows you to decompose your Deep… ☆44 · Updated last year
- Best practices & guides on how to write distributed PyTorch training code ☆500 · Updated last week
- Official implementation of "ADOPT: Modified Adam Can Converge with Any β2 with the Optimal Rate" ☆425 · Updated 10 months ago
- A microframework on top of PyTorch with first-class citizen APIs for foundation model adaptation ☆835 · Updated last month
- Hypernetworks that adapt LLMs for specific benchmark tasks using only a textual task description as the input ☆893 · Updated 4 months ago
- Scalable and Performant Data Loading ☆308 · Updated this week
- Multi-backend recommender systems with Keras 3 ☆144 · Updated this week
- TabBench is a benchmark built to evaluate machine learning models on tabular data, focusing on real-world industry use cases. ☆105 · Updated 2 weeks ago
- Build your own inference engine with expert control. Deploy agents, MCP servers, models, RAG, pipelines, and more. No MLOps. No YAML. ☆3,591 · Updated this week
- Recipes for shrinking, optimizing, and customizing cutting-edge vision models ☆1,635 · Updated last month
- ☆211 · Updated last week
- A minimalistic framework for transparently training language models and storing comprehensive checkpoints for in-depth learning dynamics… ☆289 · Updated 4 months ago
- Ultrafast serverless GPU inference, sandboxes, and background jobs ☆1,388 · Updated this week
- Build datasets using natural language ☆532 · Updated last month
- F Lite is a 10B-parameter diffusion model created by Freepik and Fal, trained exclusively on copyright-safe and SFW content. ☆414 · Updated last month
- Fast Semantic Text Deduplication & Filtering ☆816 · Updated 2 weeks ago
- Where GPUs get cooked 👩‍🍳🔥 ☆293 · Updated last month
- A Lightweight Library for AI Observability ☆251 · Updated 7 months ago
- ⏰ AI conference deadline countdowns ☆284 · Updated 3 weeks ago
- Official repository for our work on micro-budget training of large-scale diffusion models ☆1,517 · Updated 9 months ago
- Library for Jacobian descent with PyTorch. It enables the optimization of neural networks with multiple losses (e.g., multi-task learning)… ☆271 · Updated this week