shan18 / Perceiver-Resampler-XAttn-Captioning
Generating Captions via Perceiver-Resampler Cross-Attention Networks
☆17 · Updated 2 years ago
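For context, the core idea behind the repository's name is the Perceiver Resampler: a small set of learned latent queries cross-attends to variable-length visual features and compresses them into a fixed-size bundle that a caption decoder can then attend to. The sketch below is a minimal illustration of that pattern in PyTorch; all class names, layer sizes, and the simplified key/value handling are assumptions for illustration, not code from this repository.

```python
# Minimal Perceiver Resampler sketch (illustrative only; names, sizes,
# and structure are assumptions, not taken from this repository).
import torch
import torch.nn as nn

class PerceiverResampler(nn.Module):
    def __init__(self, dim=512, num_latents=64, num_layers=2, num_heads=8):
        super().__init__()
        # Learned latent queries: a fixed-size bottleneck for variable-length inputs.
        self.latents = nn.Parameter(torch.randn(num_latents, dim))
        self.layers = nn.ModuleList([
            nn.ModuleDict({
                "xattn": nn.MultiheadAttention(dim, num_heads, batch_first=True),
                "norm_q": nn.LayerNorm(dim),
                "norm_kv": nn.LayerNorm(dim),
                "ffn": nn.Sequential(
                    nn.LayerNorm(dim),
                    nn.Linear(dim, dim * 4),
                    nn.GELU(),
                    nn.Linear(dim * 4, dim),
                ),
            })
            for _ in range(num_layers)
        ])

    def forward(self, features):
        # features: (batch, seq_len, dim) visual features of any length.
        b = features.size(0)
        x = self.latents.unsqueeze(0).expand(b, -1, -1)
        for layer in self.layers:
            # Latents cross-attend to the input features, with residual connections.
            attn_out, _ = layer["xattn"](
                layer["norm_q"](x),
                layer["norm_kv"](features),
                layer["norm_kv"](features),
            )
            x = x + attn_out
            x = x + layer["ffn"](x)
        return x  # (batch, num_latents, dim): fixed-size summary for the decoder

# Usage sketch: resample ViT-style patch features into 64 latents.
resampler = PerceiverResampler()
feats = torch.randn(2, 197, 512)
latents = resampler(feats)  # (2, 64, 512)
```

In Flamingo-style captioners, these resampled latents are then injected into the text decoder through cross-attention layers, which is the "XAttn" half of the repository's name.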
Alternatives and similar repositories for Perceiver-Resampler-XAttn-Captioning
Users interested in Perceiver-Resampler-XAttn-Captioning are comparing it to the libraries listed below.
- Utilities for Training Very Large Models ☆58 · Updated 9 months ago
- Diffusion-based markup-to-image generation ☆82 · Updated 2 years ago
- Contains my experiments with the `big_vision` repo to train ViTs on ImageNet-1k. ☆22 · Updated 2 years ago
- Load any CLIP model with a standardized interface ☆21 · Updated last year
- Exploring an idea where one forgets about efficiency and carries out attention across each edge of the nodes (tokens) ☆51 · Updated 3 months ago
- ☆15 · Updated 11 months ago
- Pixel Parsing. A reproduction of OCR-free end-to-end document understanding models with open data ☆21 · Updated 11 months ago
- Code for the paper "Accessing higher dimensions for unsupervised word translation" ☆21 · Updated 2 years ago
- ☆34 · Updated 10 months ago
- Official code and data for the NeurIPS 2023 paper "ImageNet-Hard: The Hardest Images Remaining from a Study of the Power of Zoom and Spatial …" ☆39 · Updated last year
- Implementation of some personal helper functions for Einops, my favorite tensor manipulation library ❤️ ☆54 · Updated 2 years ago
- Un-*** 50-billion multimodality dataset ☆23 · Updated 2 years ago
- An open-source implementation of CLIP. ☆32 · Updated 2 years ago
- ☆13 · Updated 10 months ago
- JAX implementation of ViT-VQGAN ☆83 · Updated 2 years ago
- ☆51 · Updated last year
- ☆64 · Updated last year
- See details in https://github.com/pytorch/xla/blob/r1.12/torch_xla/distributed/fsdp/README.md ☆24 · Updated 2 years ago
- ☆23 · Updated 7 months ago
- Automatically take good care of your preemptible TPUs ☆36 · Updated 2 years ago
- Understanding how features learned by neural networks evolve throughout training ☆36 · Updated 8 months ago
- Experimental scripts for researching data-adaptive learning rate scheduling. ☆23 · Updated last year
- Code for the paper "On the Expressivity Role of LayerNorm in Transformers' Attention" (Findings of ACL 2023) ☆56 · Updated 9 months ago
- Code and pretrained models for the paper "MatMamba: A Matryoshka State Space Model" ☆59 · Updated 7 months ago
- FID computation in JAX/Flax. ☆28 · Updated last year
- My explorations into editing the knowledge and memories of an attention network ☆35 · Updated 2 years ago
- Latent Diffusion Language Models ☆68 · Updated last year
- Utilities for PyTorch distributed ☆24 · Updated 4 months ago
- PyTorch implementation of Soft MoE by Google Brain in "From Sparse to Soft Mixtures of Experts" (https://arxiv.org/pdf/2308.00951.pdf) ☆75 · Updated last year
- ☆53 · Updated last year