Noahs-ARK / RFA
☆33 · Apr 12, 2021 · Updated 4 years ago
Alternatives and similar repositories for RFA
Users interested in RFA are comparing it to the libraries listed below.
- PyTorch implementations of various random feature maps for dot-product kernels ☆22 · Jul 13, 2024 · Updated last year
- Python implementation of the paper "AntisymmetricRNN: A Dynamical System View on Recurrent Neural Networks" ☆15 · Aug 2, 2019 · Updated 6 years ago
- [EVA ICLR'23; LARA ICML'22] Efficient attention mechanisms via control variates, random features, and importance sampling ☆87 · Mar 7, 2023 · Updated 2 years ago
- ☆14 · May 14, 2019 · Updated 6 years ago
- Posterior Control of Blackbox Generation ☆23 · May 2, 2020 · Updated 5 years ago
- A method for evaluating the high-level coherence of machine-generated texts. Identifies high-level coherence issues in transformer-based … ☆11 · Mar 18, 2023 · Updated 2 years ago
- ☆12 · Jan 29, 2021 · Updated 5 years ago
- Implementation of Cascaded Head-colliding Attention (ACL 2021) ☆11 · Sep 16, 2021 · Updated 4 years ago
- AMR parser. Code for the EMNLP 2019 paper "Core Semantic First: A Top-down Approach for AMR Parsing." ☆11 · Feb 23, 2020 · Updated 5 years ago
- NeurIPS'23: Energy Discrepancies: A Score-Independent Loss for Energy-Based Models ☆17 · Oct 22, 2024 · Updated last year
- An implementation of Transformer with Expire-Span, a circuit for learning which memories to retain ☆34 · Oct 30, 2020 · Updated 5 years ago
- ToeffiPy is a PyTorch-like autograd/deep learning library based only on NumPy. ☆16 · Mar 28, 2022 · Updated 3 years ago
- ☆14 · Nov 20, 2022 · Updated 3 years ago
- Sparse Attention with Linear Units ☆20 · Apr 21, 2021 · Updated 4 years ago
- ☆18 · Mar 9, 2023 · Updated 2 years ago
- ☆20 · May 30, 2024 · Updated last year
- Official code repository of the paper "Linear Transformers Are Secretly Fast Weight Programmers" ☆111 · Jun 10, 2021 · Updated 4 years ago
- ☆42 · Sep 20, 2022 · Updated 3 years ago
- Benchmarking attention mechanisms in Vision Transformers ☆20 · Oct 10, 2022 · Updated 3 years ago
- Code for the ICML 2020 paper "Do RNN and LSTM Have Long Memory?" ☆17 · Jan 6, 2021 · Updated 5 years ago
- ☆15 · Dec 5, 2019 · Updated 6 years ago
- ☆18 · Oct 4, 2022 · Updated 3 years ago
- ☆27 · Jul 28, 2025 · Updated 6 months ago
- [ICLR 2022] Official implementation of cosformer-attention in "cosFormer: Rethinking Softmax in Attention" ☆198 · Dec 2, 2022 · Updated 3 years ago
- NeurIPS'22 Oral: EquiVSet, Learning Neural Set Functions Under the Optimal Subset Oracle ☆21 · Dec 23, 2022 · Updated 3 years ago
- ☆46 · Oct 11, 2023 · Updated 2 years ago
- Official implementation of the NeurIPS'21 paper "Implicit SVD for Graph Representation Learning" ☆21 · Nov 4, 2021 · Updated 4 years ago
- Visualization of the mean-field and neural tangent kernel regimes ☆23 · Jul 25, 2024 · Updated last year
- ☆29 · Nov 30, 2021 · Updated 4 years ago
- Code accompanying the paper "Feature Learning in Infinite-Width Neural Networks" (https://arxiv.org/abs/2011.14522) ☆62 · May 11, 2021 · Updated 4 years ago
- Implementation of random Fourier features for kernel methods, such as support vector machines and Gaussian process models ☆103 · Oct 29, 2024 · Updated last year
- ☆27 · Feb 19, 2024 · Updated last year
- ☆28 · Jul 28, 2023 · Updated 2 years ago
- PyTorch implementation of the ICLR 2024 paper "Diffusion Generative Flow Samplers: Improving learning signals through partial trajectory…" ☆26 · Dec 21, 2023 · Updated 2 years ago
- ☆24 · Nov 17, 2021 · Updated 4 years ago
- Reference implementation for "SPECTRE: Spectral Conditioning Helps to Overcome the Expressivity Limits of One-shot Graph Generators" (ICML …) ☆28 · Aug 23, 2022 · Updated 3 years ago
- Reproducing RigL (ICML 2020) as part of the ML Reproducibility Challenge 2020 ☆29 · Jan 6, 2022 · Updated 4 years ago
- Official implementation of the paper "Topographic VAEs learn Equivariant Capsules" ☆81 · Mar 4, 2022 · Updated 3 years ago
- ☆32 · May 30, 2024 · Updated last year
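A common thread among several repositories above (RFA itself, the random feature maps for dot-product kernels, and the random Fourier features library) is approximating an expensive kernel with an explicit, low-dimensional feature map so that kernel evaluations become plain inner products. As a point of reference, here is a minimal sketch of the classic Rahimi–Recht random Fourier features for the Gaussian kernel; it is illustrative only and does not reproduce any specific repository's API (function and variable names are our own):

```python
import numpy as np

def random_fourier_features(X, n_features=256, gamma=1.0, seed=0):
    """Map X of shape (n, d) to a space where phi(x) @ phi(y)
    approximates the Gaussian kernel exp(-gamma * ||x - y||^2)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Frequencies drawn from the kernel's spectral density: N(0, 2*gamma*I).
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    # Random phases uniform on [0, 2*pi).
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# Inner products of the features approximate the exact kernel matrix.
X = np.random.default_rng(1).normal(size=(5, 3))
Z = random_fourier_features(X, n_features=10000)
approx = Z @ Z.T  # (5, 5) approximate kernel matrix
exact = np.exp(-1.0 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
```

The approximation error shrinks as O(1/sqrt(n_features)); methods like RFA apply the same idea to the exponentiated dot product inside softmax attention so that attention can be computed in linear time.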