lucasb-eyer / cnn_vit_benchmarks
☆16 · Updated last year
Alternatives and similar repositories for cnn_vit_benchmarks
Users interested in cnn_vit_benchmarks are comparing it to the libraries listed below.
- Minimal sharded dataset loaders, decoders, and utils for multi-modal document, image, and text datasets. ☆159 · Updated last year
- Memory-efficient CUDA kernels for training ConvNets with PyTorch. ☆42 · Updated 7 months ago
- Recaption large (Web)Datasets with vllm and save the artifacts. ☆52 · Updated 10 months ago
- ☆59 · Updated last year
- ☆28 · Updated 2 months ago
- Contains my experiments with the `big_vision` repo to train ViTs on ImageNet-1k. ☆22 · Updated 2 years ago
- Video descriptions of research papers relating to foundation models and scaling. ☆31 · Updated 2 years ago
- Notebooks to demonstrate TimmWrapper. ☆16 · Updated 8 months ago
- Timm model explorer. ☆42 · Updated last year
- Pixel Parsing. A reproduction of OCR-free end-to-end document understanding models with open data. ☆21 · Updated last year
- Fine-tuning OpenAI CLIP Model for Image Search on medical images. ☆77 · Updated 3 years ago
- Sparse Autoencoders for Stable Diffusion XL models. ☆69 · Updated 2 months ago
- Notebooks for fine-tuning PaliGemma. ☆117 · Updated 5 months ago
- Repository for the ICLR 2024 paper "TiC-CLIP: Continual Training of CLIP Models". ☆105 · Updated last year
- Load any CLIP model with a standardized interface. ☆22 · Updated last week
- ☆65 · Updated last year
- Simplify Your Visual Data Ops. Find and visualize issues with your computer vision datasets such as duplicates, anomalies, data leakage, … ☆69 · Updated 4 months ago
- Focused on fast experimentation and simplicity. ☆75 · Updated 9 months ago
- The official repo for the paper "VeCLIP: Improving CLIP Training via Visual-enriched Captions". ☆246 · Updated 8 months ago