[ICML 2021 Oral] We show pure attention suffers rank collapse, and how different mechanisms combat it.
☆172 · Mar 8, 2021 · Updated 5 years ago
Alternatives and similar repositories for attention-rank-collapse
Users that are interested in attention-rank-collapse are comparing it to the libraries listed below.
Sorting:
- Co-Supervised Learning: Improving Weak-to-Strong Generalization with Hierarchical Mixture of Experts ☆16 · Feb 26, 2024 · Updated 2 years ago
- ☆12 · Sep 26, 2019 · Updated 6 years ago
- We investigated corruption robustness across different architectures including Convolutional Neural Networks, Vision Transformers, and th… ☆16 · Oct 28, 2021 · Updated 4 years ago
- ☆13 · Feb 16, 2021 · Updated 5 years ago
- Code for the TCS paper "On the performance of learned data structures" and the ICML paper "Why are learned indexes so effective?" ☆21 · May 9, 2021 · Updated 4 years ago
- Efficient Householder Transformation in PyTorch ☆69 · Jul 6, 2021 · Updated 4 years ago
- ☆389 · Oct 18, 2023 · Updated 2 years ago
- [NeurIPS'20] Code for the paper "Compositional Visual Generation and Inference with Energy Based Models" ☆47 · Mar 24, 2023 · Updated 3 years ago
- A study of the performance of optimal transport. ☆10 · Jul 4, 2020 · Updated 5 years ago
- A simple Transformer where the softmax has been replaced with normalization ☆20 · Sep 11, 2020 · Updated 5 years ago
- Implementation of RETRO, DeepMind's retrieval-based attention net, in PyTorch ☆879 · Oct 30, 2023 · Updated 2 years ago
- ☆26 · Nov 23, 2023 · Updated 2 years ago
- ☆20 · Feb 26, 2021 · Updated 5 years ago
- ☆119 · Feb 11, 2025 · Updated last year
- Repository for the paper "A generative nonparametric Bayesian model for whole genomes" ☆15 · Jun 7, 2023 · Updated 2 years ago
- Official codebase for "Pretrained Transformers as Universal Computation Engines" ☆246 · Jan 14, 2022 · Updated 4 years ago
- Implementation of N-Grammer, augmenting Transformers with latent n-grams, in PyTorch ☆76 · Dec 4, 2022 · Updated 3 years ago
- MNIST, but with Bezier curves instead of pixels ☆15 · Oct 29, 2021 · Updated 4 years ago
- ☆14 · Feb 26, 2024 · Updated 2 years ago
- ☆19 · Jun 21, 2025 · Updated 9 months ago
- Source code for the D2AGE model: Distance-aware DAG Embedding for Proximity Search on Heterogeneous Graphs ☆12 · Feb 20, 2018 · Updated 8 years ago
- Training scripts for the paper Miceli Barone et al. (2017), "Deep Architectures for Neural Machine Translation" ☆11 · Jul 13, 2017 · Updated 8 years ago
- ☆45 · Mar 3, 2023 · Updated 3 years ago
- ☆31 · Oct 13, 2021 · Updated 4 years ago
- Implementation of Fast Transformer in PyTorch ☆176 · Aug 26, 2021 · Updated 4 years ago
- Implementation of H-Transformer-1D, hierarchical attention for sequence learning ☆166 · Feb 12, 2024 · Updated 2 years ago
- Fine-Tuning Pre-trained Transformers into Decaying Fast Weights ☆19 · Oct 9, 2022 · Updated 3 years ago
- Pretraining summarization models using a corpus of nonsense ☆13 · Sep 28, 2021 · Updated 4 years ago
- Computation of convolutional kernels (CKN and NTK) in C++ ☆14 · Dec 13, 2022 · Updated 3 years ago
- [NeurIPS 2020] COPT: Coordinated Optimal Transport on Graphs ☆16 · Dec 22, 2020 · Updated 5 years ago
- Disentangled Non-Local Neural Networks ☆84 · Dec 7, 2020 · Updated 5 years ago
- ☆20 · Mar 22, 2024 · Updated 2 years ago
- Transformer based on a variant of attention with linear complexity with respect to sequence length ☆826 · May 5, 2024 · Updated last year
- Official repository for the paper "Big Transfer (BiT): General Visual Representation Learning" ☆1,538 · Jul 30, 2024 · Updated last year
- Source code for "Preference-grounded Token-level Guidance for Language Model Fine-tuning" (NeurIPS 2023) ☆17 · Jan 8, 2025 · Updated last year
- Implementation of the paper "Learning to Ask Conversational Questions by Optimizing Levenshtein Distance" ☆10 · Jul 5, 2021 · Updated 4 years ago
- ☆39 · Jul 13, 2022 · Updated 3 years ago
- Official DeiT repository ☆4,329 · Mar 15, 2024 · Updated 2 years ago
- Code for "Contrastive Multi-Document Question Generation" ☆11 · Oct 16, 2022 · Updated 3 years ago