PrunaAI / pruna
Pruna is a model optimization framework built for developers, enabling you to deliver faster, more efficient models with minimal overhead.
☆703 · Updated this week
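As a rough illustration of the workflow the description above refers to, here is a minimal sketch of optimizing a PyTorch model with pruna. The `smash`/`SmashConfig` entry points and the specific config key and backend name are assumptions based on the project's README and may differ between versions.

```python
# Minimal sketch (assumption: pruna exposes `smash` and `SmashConfig`
# as its main entry points; exact config keys may vary by version).
import torch
import torchvision.models as models
from pruna import SmashConfig, smash

# Load any PyTorch model to optimize.
model = models.resnet18(weights=None)

# Describe which optimizations to apply (hypothetical example config).
smash_config = SmashConfig()
smash_config["compiler"] = "torch_compile"  # assumed backend name

# Produce the optimized ("smashed") model and use it as a drop-in replacement.
smashed_model = smash(model=model, smash_config=smash_config)
output = smashed_model(torch.randn(1, 3, 224, 224))
```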
Alternatives and similar repositories for pruna
Users interested in pruna are comparing it to the libraries listed below.
- A curated list of materials on AI efficiency ☆52 · Updated this week
- Transform datasets at scale. Optimize datasets for fast AI model training. ☆485 · Updated this week
- Multi-backend recommender systems with Keras 3 ☆125 · Updated this week
- Next Generation Experimental Tracking for Machine Learning Operations ☆294 · Updated last week
- Scalable and Performant Data Loading ☆269 · Updated last week
- Modular, scalable library to train ML models ☆121 · Updated this week
- Tool for generating high quality Synthetic datasets ☆913 · Updated this week
- the scikit-learn sidekick ☆465 · Updated this week
- Efficient optimizers ☆208 · Updated this week
- Repository for CARTE: Context-Aware Representation of Table Entries ☆127 · Updated 2 months ago
- Fast State-of-the-Art Static Embeddings ☆1,706 · Updated this week
- Software design principles for machine learning applications ☆359 · Updated 2 months ago
- 🤗 Benchmark Large Language Models Reliably On Your Data ☆318 · Updated last week
- An implementation of PSGD Kron second-order optimizer for PyTorch ☆91 · Updated 2 months ago
- Thunder gives you PyTorch models superpowers for training and inference. Unlock out-of-the-box optimizations for performance, memory and … ☆1,358 · Updated this week
- A Rust-based data loader which can be used from Python. Processing data per sample at GB/s speeds, covering various use cases eventually. ☆103 · Updated this week
- Best practices & guides on how to write distributed pytorch training code ☆433 · Updated 3 months ago
- For optimization algorithm research and development. ☆518 · Updated this week
- Late Interaction Models Training & Retrieval ☆417 · Updated this week
- A Lightweight Library for AI Observability ☆243 · Updated 3 months ago
- Actually Robust Training - Tool Inspired by Andrej Karpathy "Recipe for training neural networks". It allows you to decompose your Deep… ☆45 · Updated last year
- Official Implementation of "ADOPT: Modified Adam Can Converge with Any β2 with the Optimal Rate" ☆423 · Updated 5 months ago
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆185 · Updated this week
- Library for Jacobian descent with PyTorch. It enables optimization of neural networks with multiple losses (e.g. multi-task learning). ☆237 · Updated this week
- 🧱 Modula software package ☆194 · Updated 2 months ago
- Schedule-Free Optimization in PyTorch ☆2,169 · Updated 2 weeks ago