gradio-app / trackio
A lightweight, local-first, and free experiment tracking library from Hugging Face 🤗
☆1,234 · Updated last week
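Since trackio is pitched as a drop-in for the wandb-style logging workflow, a minimal run sketch might look like the following, assuming the usual `init`/`log`/`finish` entry points (project name, config, and metric values are illustrative placeholders):

```python
# Minimal sketch, assuming trackio exposes a wandb-style init/log/finish API.
import trackio

# Hypothetical project name and config, for illustration only.
trackio.init(project="demo-experiments", config={"lr": 3e-4, "epochs": 3})

for epoch in range(3):
    # Log whatever scalar metrics the training loop produces.
    trackio.log({"epoch": epoch, "loss": 1.0 / (epoch + 1)})

trackio.finish()
```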
Alternatives and similar repositories for trackio
Users interested in trackio are comparing it to the libraries listed below.
- Speed up model training by fixing data loading. ☆573 · Updated 2 weeks ago
- A CLI to estimate inference memory requirements for Hugging Face models, written in Python. ☆501 · Updated last week
- An interface library for RL post training with environments. ☆1,090 · Updated this week
- Scalable and Performant Data Loading ☆362 · Updated last week
- Hypernetworks that adapt LLMs for specific benchmark tasks using only textual task description as the input ☆938 · Updated 7 months ago
- dLLM: Simple Diffusion Language Modeling ☆1,633 · Updated 3 weeks ago
- 🤗 Benchmark Large Language Models Reliably On Your Data ☆425 · Updated last month
- Where GPUs get cooked 👩‍🍳🔥 ☆357 · Updated last week
- ☆214 · Updated last week
- Simple & Scalable Pretraining for Neural Architecture Research ☆307 · Updated last month
- 🎨 NeMo Data Designer: A general library for generating high-quality synthetic data from scratch or based on seed data. ☆654 · Updated this week
- Inference, Fine Tuning and many more recipes with Gemma family of models ☆279 · Updated 6 months ago
- Build your own visual reasoning model ☆417 · Updated 2 weeks ago
- Best practices & guides on how to write distributed pytorch training code ☆571 · Updated 3 months ago
- Fault tolerance for PyTorch (HSDP, LocalSGD, DiLoCo, Streaming DiLoCo) ☆472 · Updated 2 weeks ago
- ⏰ AI conference deadline countdowns ☆320 · Updated 2 weeks ago
- Collection of scripts and notebooks for OpenAI's latest GPT OSS models ☆496 · Updated 5 months ago
- For optimization algorithm research and development. ☆556 · Updated 2 weeks ago
- Load compute kernels from the Hub ☆381 · Updated this week
- Official Implementation of "ADOPT: Modified Adam Can Converge with Any β2 with the Optimal Rate" ☆434 · Updated last year
- ☆540 · Updated 5 months ago
- PyTorch compiler that accelerates training and inference. Get built-in optimizations for performance, memory, parallelism, and easily wri… ☆1,437 · Updated last week
- FlexAttention based, minimal vllm-style inference engine for fast Gemma 2 inference. ☆333 · Updated 2 months ago
- ☆952 · Updated 2 months ago
- PyTorch media decoding and encoding ☆931 · Updated this week
- Pruna is a model optimization framework built for developers, enabling you to deliver faster, more efficient models with minimal overhead… ☆1,075 · Updated last week
- Recipes for shrinking, optimizing, customizing cutting edge vision models. ☆1,865 · Updated 3 weeks ago
- Next Generation Experimental Tracking for Machine Learning Operations ☆364 · Updated 8 months ago
- ☆695 · Updated 9 months ago
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ☆941 · Updated 2 months ago