PrunaAI / pruna
Pruna is a model optimization framework built for developers, enabling you to deliver faster, more efficient models with minimal overhead.
⭐ 1,075 · Updated last week
Alternatives and similar repositories for pruna
Users interested in pruna are comparing it to the libraries listed below.
- A lightweight, local-first experiment tracking library from Hugging Face 🤗 · ⭐ 1,234 · Updated last week
- Next Generation Experimental Tracking for Machine Learning Operations · ⭐ 364 · Updated 8 months ago
- ⭐ 214 · Updated last week
- Speed up model training by fixing data loading. · ⭐ 573 · Updated 2 weeks ago
- An interface library for RL post training with environments. · ⭐ 1,090 · Updated this week
- TabBench is a benchmark built to evaluate machine learning models on tabular data, focusing on real-world industry use cases. · ⭐ 108 · Updated 4 months ago
- Fast State-of-the-Art Static Embeddings · ⭐ 1,990 · Updated last month
- PyTorch compiler that accelerates training and inference. Get built-in optimizations for performance, memory, parallelism, and easily wri… · ⭐ 1,437 · Updated last week
- Multi-backend recommender systems with Keras 3 · ⭐ 160 · Updated last week
- Official Implementation of "ADOPT: Modified Adam Can Converge with Any β₂ with the Optimal Rate" · ⭐ 434 · Updated last year
- A CLI to estimate inference memory requirements for Hugging Face models, written in Python. · ⭐ 501 · Updated last week
- An implementation of PSGD Kron second-order optimizer for PyTorch · ⭐ 98 · Updated 6 months ago
- dLLM: Simple Diffusion Language Modeling · ⭐ 1,633 · Updated 3 weeks ago
- Hypernetworks that adapt LLMs for specific benchmark tasks using only textual task description as the input · ⭐ 938 · Updated 7 months ago
- Actually Robust Training - Tool inspired by Andrej Karpathy's "Recipe for training neural networks". It allows you to decompose your Deep… · ⭐ 43 · Updated last year
- Where GPUs get cooked 👩‍🍳🔥 · ⭐ 357 · Updated last week
- Best practices & guides on how to write distributed PyTorch training code · ⭐ 571 · Updated 3 months ago
- ⏰ AI conference deadline countdowns · ⭐ 320 · Updated 2 weeks ago
- Scalable and Performant Data Loading · ⭐ 362 · Updated last week
- A minimalistic framework for transparently training language models and storing comprehensive checkpoints for in-depth learning dynamics… · ⭐ 297 · Updated 2 months ago
- Official repository for our work on micro-budget training of large-scale diffusion models. · ⭐ 1,550 · Updated last year
- 🎨 NeMo Data Designer: A general library for generating high-quality synthetic data from scratch or based on seed data. · ⭐ 654 · Updated this week
- A curated list of materials on AI efficiency · ⭐ 203 · Updated last month
- Recipes for shrinking, optimizing, and customizing cutting-edge vision models. · ⭐ 1,865 · Updated 3 weeks ago
- ⭐ 540 · Updated 5 months ago
- 🤗 Benchmark Large Language Models Reliably On Your Data · ⭐ 425 · Updated last month
- Any model. Any hardware. Zero compromise. Built with @ziglang / @openxla / MLIR / @bazelbuild · ⭐ 3,068 · Updated last week
- ⭐ 300 · Updated this week
- The CLI for GPUs · ⭐ 137 · Updated 2 months ago
- Schedule-Free Optimization in PyTorch · ⭐ 2,257 · Updated 8 months ago