lucidrains / AMIE-pytorch
Implementation of the general framework for AMIE, from the paper "Towards Conversational Diagnostic AI", out of Google DeepMind
☆60 · Updated 7 months ago
Alternatives and similar repositories for AMIE-pytorch:
Users interested in AMIE-pytorch are comparing it to the libraries listed below.
- A repository to house some personal attempts to beat some state-of-the-art for medical datasets ☆98 · Updated last year
- Utilities for Training Very Large Models ☆58 · Updated 6 months ago
- Implementation of Infini-Transformer in Pytorch ☆110 · Updated 3 months ago
- Implementation of Gradient Agreement Filtering, from Chaubard et al. of Stanford, but for single-machine microbatches, in Pytorch ☆23 · Updated 2 months ago
- Implementation of 🌻 Mirasol, SOTA Multimodal Autoregressive model out of Google DeepMind, in Pytorch ☆88 · Updated last year
- ☆48 · Updated last year
- Easily run PyTorch on multiple GPUs & machines ☆45 · Updated 3 weeks ago
- Video descriptions of research papers relating to foundation models and scaling ☆30 · Updated 2 years ago
- Google Research ☆46 · Updated 2 years ago
- ☆30 · Updated 10 months ago
- I2M2: Jointly Modeling Inter- & Intra-Modality Dependencies for Multi-modal Learning (NeurIPS 2024) ☆19 · Updated 5 months ago
- PyTorch implementation of the paper "MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training" ☆23 · Updated this week
- Implementation of TableFormer, Robust Transformer Modeling for Table-Text Encoding, in Pytorch ☆37 · Updated 3 years ago
- ☆63 · Updated 6 months ago
- This library supports evaluating disparities in generated image quality, diversity, and consistency between geographic regions. ☆20 · Updated 10 months ago
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… ☆53 · Updated last year
- ☆46 · Updated last week
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆55 · Updated 7 months ago
- Timm model explorer ☆39 · Updated last year
- Some personal experiments around routing tokens to different autoregressive attention, akin to mixture-of-experts ☆118 · Updated 6 months ago
- ☆43 · Updated 6 months ago
- This repository contains papers for a comprehensive survey on accelerated generation techniques in Large Language Models (LLMs). ☆11 · Updated 10 months ago
- PyTorch implementation of Soft MoE by Google Brain in "From Sparse to Soft Mixtures of Experts" (https://arxiv.org/pdf/2308.00951.pdf) ☆71 · Updated last year
- Holistic evaluation of multimodal foundation models ☆47 · Updated 8 months ago
- Implementation of Bitune: Bidirectional Instruction-Tuning ☆19 · Updated 10 months ago
- HGRN2: Gated Linear RNNs with State Expansion ☆54 · Updated 7 months ago
- Explorations into adversarial losses on top of autoregressive loss for language modeling ☆35 · Updated last month
- A dashboard for exploring timm learning rate schedulers ☆19 · Updated 4 months ago
- My explorations into editing the knowledge and memories of an attention network ☆34 · Updated 2 years ago
- Code and benchmark for the paper "A Practitioner's Guide to Continual Multimodal Pretraining" [NeurIPS'24] ☆54 · Updated 4 months ago