google-research / tuning_playbook
A playbook for systematically maximizing the performance of deep learning models.
☆29,006 · Updated last year
Alternatives and similar repositories for tuning_playbook
Users who are interested in tuning_playbook are comparing it to the libraries listed below.
- Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in Py… ☆23,515 · Updated this week
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities ☆21,582 · Updated last month
- Flexible and powerful tensor operations for readable and reliable code (for pytorch, jax, TF and others) ☆9,077 · Updated last month
- A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training ☆22,341 · Updated 11 months ago
- The largest collection of PyTorch image encoders / backbones. Including train, eval, inference, export scripts, and pretrained weights --… ☆34,913 · Updated last week
- A playbook that systematically teaches you how to maximize the performance of deep learning models. ☆2,960 · Updated 2 years ago
- 🎨 ML Visuals contains figures and templates which you can reuse and customize to improve your scientific writing. ☆15,515 · Updated 2 years ago
- A concise but complete full-attention transformer with a set of promising experimental features from various papers ☆5,474 · Updated this week
- Fast and memory-efficient exact attention ☆18,656 · Updated this week
- Kolmogorov Arnold Networks ☆15,803 · Updated 6 months ago
- An open source implementation of CLIP. ☆12,257 · Updated last week
- CLIP (Contrastive Language-Image Pretraining), predict the most relevant text snippet given an image ☆30,072 · Updated last year
- PyTorch code and models for the DINOv2 self-supervised learning method. ☆11,207 · Updated last month
- This is an official implementation for "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows". ☆15,057 · Updated last year
- ☆11,632 · Updated 4 months ago
- 🧑‍🏫 60+ Implementations/tutorials of deep learning papers with side-by-side notes 📝; including transformers (original, xl, switch, fee… ☆62,184 · Updated 2 weeks ago
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ☆8,986 · Updated this week
- LAVIS - A One-stop Library for Language-Vision Intelligence ☆10,797 · Updated 8 months ago
- An annotated implementation of the Transformer paper. ☆6,388 · Updated last year
- Pretrain, finetune ANY AI model of ANY size on multiple GPUs, TPUs with zero code changes. ☆29,880 · Updated this week
- End-to-End Object Detection with Transformers ☆14,568 · Updated last year
- This repository contains demos I made with the Transformers library by HuggingFace. ☆11,122 · Updated last month
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆19,184 · Updated this week
- 🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal model… ☆147,772 · Updated this week
- Paragraph-by-paragraph close readings of classic and recent deep learning papers ☆30,918 · Updated 4 months ago
- Mamba SSM architecture ☆15,478 · Updated 2 weeks ago
- Fast and flexible image augmentation library. Paper about the library: https://www.mdpi.com/2078-2489/11/2/125 ☆15,083 · Updated last month
- Advanced AI Explainability for computer vision. Support for CNNs, Vision Transformers, Classification, Object detection, Segmentation, I… ☆11,972 · Updated 3 months ago
- Code release for ConvNeXt model ☆6,076 · Updated 2 years ago
- PyTorch code for Vision Transformers training with the Self-Supervised learning method DINO ☆7,024 · Updated last year