lucidrains / AMIE-pytorch
Implementation of the general framework for AMIE, from the paper "Towards Conversational Diagnostic AI", out of Google Deepmind
⭐66 · Updated 10 months ago
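The paper describes AMIE being refined in a simulated self-play dialogue environment: a doctor agent and a patient agent converse over a clinical vignette, and a critic agent scores and gives feedback on the finished dialogue. Purely as an illustration of that inner loop (this is not the AMIE-pytorch API; `doctor`, `patient`, `critic`, and `vignette` are hypothetical placeholders), a minimal sketch might look like this:

```python
# Hypothetical sketch of the paper's inner self-play loop; NOT the AMIE-pytorch API.
# `doctor`, `patient`, and `critic` stand in for language-model-backed agents.

def inner_self_play(doctor, patient, critic, vignette, max_turns=4):
    """Simulate one consultation: doctor and patient alternate turns
    conditioned on the dialogue so far, then the critic scores the result."""
    dialogue = []  # list of (speaker, utterance) pairs
    for _ in range(max_turns):
        dialogue.append(("doctor", doctor(dialogue)))
        dialogue.append(("patient", patient(dialogue, vignette)))
    return dialogue, critic(dialogue, vignette)

# Toy stand-ins so the sketch runs end to end.
doctor = lambda history: f"Question {len(history) // 2 + 1}: can you describe the symptom?"
patient = lambda history, vignette: f"Answer based on: {vignette['complaint']}"
critic = lambda history, vignette: {"quality": 1.0, "feedback": "covered the chief complaint"}

dialogue, feedback = inner_self_play(doctor, patient, critic, {"complaint": "chest pain"})
```

In the paper, an outer loop then folds the critic-refined simulated dialogues back into the fine-tuning mixture; that part is omitted from the sketch above.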
Alternatives and similar repositories for AMIE-pytorch
Users interested in AMIE-pytorch are comparing it to the libraries listed below.
- Implementation of 🌻 Mirasol, SOTA Multimodal Autoregressive model out of Google Deepmind, in Pytorch · ⭐89 · Updated last year
- Implementation of Infini-Transformer in Pytorch · ⭐110 · Updated 7 months ago
- Some personal experiments around routing tokens to different autoregressive attention, akin to mixture-of-experts · ⭐120 · Updated 9 months ago
- Implementation of the Llama architecture with RLHF + Q-learning · ⭐166 · Updated 6 months ago
- PyTorch implementation of Soft MoE by Google Brain in "From Sparse to Soft Mixtures of Experts" (https://arxiv.org/pdf/2308.00951.pdf) · ⭐75 · Updated last year
- A repository to house some personal attempts to beat some state-of-the-art for medical datasets · ⭐99 · Updated last year
- Implementation of Zorro, Masked Multimodal Transformer, in Pytorch · ⭐97 · Updated last year
- Utilities for Training Very Large Models · ⭐58 · Updated 10 months ago
- Explorations into the recently proposed Taylor Series Linear Attention · ⭐100 · Updated 11 months ago
- Implementation of Gradient Agreement Filtering, from Chaubard et al. of Stanford, but for single machine microbatches, in Pytorch · ⭐25 · Updated 6 months ago
- ⭐81 · Updated last year
- Code for the paper: "No Zero-Shot Without Exponential Data: Pretraining Concept Frequency Determines Multimodal Model Performance" [NeurIPS 2024] · ⭐90 · Updated last year
- Model Stock: All we need is just a few fine-tuned models · ⭐119 · Updated 10 months ago
- Sparse and discrete interpretability tool for neural networks · ⭐63 · Updated last year
- Implementation of MaMMUT, a simple vision-encoder text-decoder architecture for multimodal tasks from Google, in Pytorch · ⭐103 · Updated last year
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… · ⭐54 · Updated last year
- One Initialization to Rule them All: Fine-tuning via Explained Variance Adaptation · ⭐41 · Updated 9 months ago
- Official code for the ICML 2024 paper "The Entropy Enigma: Success and Failure of Entropy Minimization" · ⭐53 · Updated last year
- Official implementation of MAIA, A Multimodal Automated Interpretability Agent · ⭐83 · Updated last month
- HGRN2: Gated Linear RNNs with State Expansion · ⭐55 · Updated 11 months ago
- Implementation of TableFormer, Robust Transformer Modeling for Table-Text Encoding, in Pytorch · ⭐39 · Updated 3 years ago
- Official code for "TOAST: Transfer Learning via Attention Steering" · ⭐189 · Updated last year
- We study toy models of skill learning. · ⭐29 · Updated 6 months ago
- ⭐51 · Updated last year
- Code and pretrained models for the paper: "MatMamba: A Matryoshka State Space Model" · ⭐60 · Updated 8 months ago
- ⭐41 · Updated last year
- Experiments around a simple idea for inducing multiple hierarchical predictive models within a GPT · ⭐215 · Updated 11 months ago
- Google Research · ⭐46 · Updated 2 years ago
- Exploring an idea where one forgets about efficiency and carries out attention across each edge of the nodes (tokens) · ⭐52 · Updated 4 months ago
- A regression-like loss to improve numerical reasoning in language models · ⭐24 · Updated 2 weeks ago