shan18 / Perceiver-Resampler-XAttn-Captioning
Generating Captions via Perceiver-Resampler Cross-Attention Networks
☆16 · Updated 2 years ago
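The repository's name refers to a Flamingo-style Perceiver Resampler: a fixed set of learned latent queries cross-attends to a variable-length sequence of visual features, compressing them into a fixed-length summary that a caption decoder can consume. A minimal NumPy sketch of that cross-attention step (all shapes, names, and the single-head setup are illustrative assumptions, not the repository's actual code):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def resampler_xattn(latents, features, Wq, Wk, Wv):
    """One cross-attention step: learned latents act as queries over
    visual features (keys/values), producing a fixed-size summary."""
    q = latents @ Wq                          # (num_latents, d)
    k = features @ Wk                         # (num_feats, d)
    v = features @ Wv                         # (num_feats, d)
    scores = q @ k.T / np.sqrt(q.shape[-1])   # (num_latents, num_feats)
    attn = softmax(scores, axis=-1)
    return latents + attn @ v                 # residual; (num_latents, d)

rng = np.random.default_rng(0)
d, num_latents, num_feats = 16, 8, 50  # any num_feats compresses to 8 latents
latents = rng.standard_normal((num_latents, d))
features = rng.standard_normal((num_feats, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))

out = resampler_xattn(latents, features, Wq, Wk, Wv)
print(out.shape)  # (8, 16): fixed-length output regardless of num_feats
```

The key property is that the output length is set by the number of latents, not by the input: 50 patch features, 500, or 5 all resample to the same 8 tokens.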
Alternatives and similar repositories for Perceiver-Resampler-XAttn-Captioning:
Users interested in Perceiver-Resampler-XAttn-Captioning are comparing it to the libraries listed below.
- Utilities for Training Very Large Models ☆57 · Updated 4 months ago
- MEXMA: Token-level objectives improve sentence representations ☆40 · Updated last month
- Here we will test various linear attention designs. ☆58 · Updated 9 months ago
- Official implementation of ECCV24 paper: POA ☆24 · Updated 6 months ago
- ☆48 · Updated last year
- ☆18 · Updated 8 months ago
- Code and pretrained models for the paper: "MatMamba: A Matryoshka State Space Model" ☆58 · Updated 2 months ago
- Code for T-MARS data filtering ☆35 · Updated last year
- Implementation of some personal helper functions for Einops, my favorite tensor manipulation library ❤️ ☆53 · Updated 2 years ago
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" ☆59 · Updated 4 months ago
- [NeurIPS 2023] Sparse Modular Activation for Efficient Sequence Modeling ☆35 · Updated last year
- ☆38 · Updated 10 months ago
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference. ☆57 · Updated 3 weeks ago
- ☆44 · Updated 3 months ago
- Code for "Merging Text Transformers from Different Initializations" ☆19 · Updated 2 weeks ago
- LL3M: Large Language and Multi-Modal Model in Jax ☆69 · Updated 9 months ago
- Code for Zero-Shot Tokenizer Transfer ☆120 · Updated last month
- This repo is based on https://github.com/jiaweizzhao/GaLore ☆24 · Updated 5 months ago
- Pixel Parsing. A reproduction of OCR-free end-to-end document understanding models with open data ☆21 · Updated 6 months ago
- Code for the paper "Accessing higher dimensions for unsupervised word translation" ☆21 · Updated last year
- Using FlexAttention to compute attention with different masking patterns ☆40 · Updated 4 months ago
- PyTorch code for System-1.x: Learning to Balance Fast and Slow Planning with Language Models ☆21 · Updated 6 months ago
- SWIM-IR is a Synthetic Wikipedia-based Multilingual Information Retrieval training set with 28 million query-passage pairs spanning 33 languages ☆46 · Updated last year
- Implementation of the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google, in PyTorch ☆53 · Updated last week
- Official implementation of the ICML 2024 paper RoSA (Robust Adaptation) ☆38 · Updated last year
- ☆25 · Updated last year
- Code for PHATGOOSE, introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆81 · Updated 11 months ago
- My explorations into editing the knowledge and memories of an attention network ☆34 · Updated 2 years ago
- Un-*** 50-billion multimodal dataset ☆24 · Updated 2 years ago