haoliuhl / language-quantized-autoencoders
Language Quantized AutoEncoders
☆108 · Updated 2 years ago
Alternatives and similar repositories for language-quantized-autoencoders
Users interested in language-quantized-autoencoders are comparing it to the libraries listed below.
- https://arxiv.org/abs/2209.15162 ☆50 · Updated 2 years ago
- ☆120 · Updated 2 years ago
- Code for the paper "Diffusion Language Models Can Perform Many Tasks with Scaling and Instruction-Finetuning" ☆83 · Updated last year
- PyTorch implementation of LIMoE ☆53 · Updated last year
- ☆50 · Updated last year
- Patching open-vocabulary models by interpolating weights ☆91 · Updated last year
- [arXiv] Aligning Modalities in Vision Large Language Models via Preference Fine-tuning ☆86 · Updated last year
- ☆129 · Updated 2 years ago
- ☆55 · Updated last year
- A PyTorch implementation of Multimodal Few-Shot Learning with Frozen Language Models, with OPT ☆43 · Updated 3 years ago
- Implementation of 🌻 Mirasol, SOTA multimodal autoregressive model out of Google DeepMind, in PyTorch ☆89 · Updated last year
- Matryoshka Multimodal Models ☆112 · Updated 6 months ago
- Online Adaptation of Language Models with a Memory of Amortized Contexts (NeurIPS 2024) ☆63 · Updated last year
- The official code for the paper "EasyGen: Easing Multimodal Generation with a Bidirectional Conditional Diffusion Model and LLMs" ☆74 · Updated 8 months ago
- Code for the paper "Hyperbolic Image-Text Representations", Desai et al., ICML 2023 ☆174 · Updated last year
- Implementation of Zorro, Masked Multimodal Transformer, in PyTorch ☆97 · Updated last year
- Implementation for the paper "Reliable Visual Question Answering: Abstain Rather Than Answer Incorrectly" (ECCV 2022: https://arxiv.org/abs…) ☆35 · Updated 2 years ago
- Sparkles: Unlocking Chats Across Multiple Images for Multimodal Instruction-Following Models ☆44 · Updated last year
- ☆104 · Updated 2 years ago
- Official code for "pi-Tuning: Transferring Multimodal Foundation Models with Optimal Multi-task Interpolation", ICML 2023 ☆33 · Updated 2 years ago
- PyTorch code for the paper "An Empirical Study of Multimodal Model Merging" ☆37 · Updated last year
- [NeurIPS 2023] Make Your Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning ☆31 · Updated 2 years ago
- [NeurIPS 2023] Bootstrapping Vision-Language Learning with Decoupled Language Pre-training ☆25 · Updated last year
- Official code for "TOAST: Transfer Learning via Attention Steering" ☆189 · Updated last year
- Implementation of Soft MoE, proposed by Brain's Vision team, in PyTorch ☆309 · Updated 4 months ago
- Model Stock: All we need is just a few fine-tuned models ☆119 · Updated 10 months ago
- [NeurIPS 2023] A faithful benchmark for vision-language compositionality ☆82 · Updated last year
- ☆34 · Updated last year
- Code for the paper "CiT: Curation in Training for Effective Vision-Language Data" ☆78 · Updated 2 years ago
- ☆85 · Updated 2 years ago