albanie / foundation-models
Video descriptions of research papers relating to foundation models and scaling
☆31 · Updated 2 years ago
Alternatives and similar repositories for foundation-models
Users interested in foundation-models are comparing it to the repositories listed below.
- Code for the experiments in "ConvNet vs Transformer, Supervised vs CLIP: Beyond ImageNet Accuracy" ☆101 · Updated last year
- Implementation of the general framework for AMIE, from the Google DeepMind paper "Towards Conversational Diagnostic AI" ☆68 · Updated last year
- Implementation of MaMMUT, a simple vision-encoder text-decoder architecture for multimodal tasks from Google, in PyTorch ☆103 · Updated last year
- An official PyTorch implementation for CLIPPR ☆29 · Updated 2 years ago
- Repository for the paper "TiC-CLIP: Continual Training of CLIP Models" (ICLR 2024) ☆104 · Updated last year
- PyTorch implementation of the paper "ViP: A Differentially Private Foundation Model for Computer Vision" ☆36 · Updated 2 years ago
- Official code for "TOAST: Transfer Learning via Attention Steering" ☆188 · Updated 2 years ago
- Code release for "Improved baselines for vision-language pre-training" ☆60 · Updated last year
- Conference schedule, top papers, and analysis of the data for NeurIPS 2023! ☆120 · Updated last year
- ☆23 · Updated 8 months ago
- PyTorch implementation of Soft MoE by Google Brain in "From Sparse to Soft Mixtures of Experts" (https://arxiv.org/pdf/2308.00951.pdf) ☆78 · Updated last year
- Code for the paper "CiT: Curation in Training for Effective Vision-Language Data" ☆78 · Updated 2 years ago
- ☆19 · Updated 2 years ago
- Code and models for the paper "The effectiveness of MAE pre-pretraining for billion-scale pretraining" (https://arxiv.org/abs/2303.13496) ☆92 · Updated 5 months ago
- ☆35 · Updated last year
- ☆52 · Updated 8 months ago
- Understanding model mistakes with human annotations ☆106 · Updated 2 years ago
- Code and benchmark for the paper "A Practitioner's Guide to Continual Multimodal Pretraining" [NeurIPS'24] ☆58 · Updated 9 months ago
- Visualizing representations with a diffusion-based conditional generative model ☆100 · Updated 2 years ago
- Code for the paper "No Zero-Shot Without Exponential Data: Pretraining Concept Frequency Determines Multimodal Model Performance" [NeurIPS'24] ☆90 · Updated last year
- Patching open-vocabulary models by interpolating weights ☆91 · Updated last year
- PyTorch implementation of R-MAE (https://arxiv.org/abs/2306.05411) ☆114 · Updated 2 years ago
- Code base of SynthCLIP: CLIP training with purely synthetic text-image pairs from LLMs and TTIs ☆100 · Updated 5 months ago
- ViLMA: A Zero-Shot Benchmark for Linguistic and Temporal Grounding in Video-Language Models (ICLR 2024, Official Implementation) ☆16 · Updated last year
- ☆183 · Updated 11 months ago
- ☆30 · Updated 2 years ago
- Original code base for "On Pretraining Data Diversity for Self-Supervised Learning" ☆14 · Updated 8 months ago
- Code for our ICLR 2024 paper "PerceptionCLIP: Visual Classification by Inferring and Conditioning on Contexts" ☆78 · Updated last year
- ☆134 · Updated last year
- PyTorch implementation of "Object Recognition as Next Token Prediction" [CVPR'24 Highlight] ☆181 · Updated 4 months ago