Alternatives and similar repositories for RFA
RFA · ☆33 · Apr 12, 2021 · Updated 5 years ago
Users who are interested in RFA are comparing it to the libraries listed below.
- This repository contains PyTorch implementations of various random feature maps for dot product kernels. ☆22 · Jul 13, 2024 · Updated last year
- A method for evaluating the high-level coherence of machine-generated texts. Identifies high-level coherence issues in transformer-based … ☆11 · Mar 18, 2023 · Updated 3 years ago
- Python implementation of the paper "AntisymmetricRNN: A Dynamical System View on Recurrent Neural Networks" ☆15 · Aug 2, 2019 · Updated 6 years ago
- Posterior Control of Blackbox Generation ☆23 · May 2, 2020 · Updated 5 years ago
- Official repository for the paper "Going Beyond Linear Transformers with Recurrent Fast Weight Programmers" (NeurIPS 2021) ☆51 · Jun 11, 2025 · Updated 10 months ago
- AMR-parser. Code for the EMNLP 2019 paper "Core Semantic First: A Top-down Approach for AMR Parsing." ☆11 · Feb 23, 2020 · Updated 6 years ago
- Implementation of Cascaded Head-colliding Attention (ACL 2021) ☆11 · Sep 16, 2021 · Updated 4 years ago
- ☆13 · Oct 18, 2023 · Updated 2 years ago
- Code for reversible recurrent neural networks ☆40 · Jan 20, 2019 · Updated 7 years ago
- ☆12 · Jun 5, 2024 · Updated last year
- ☆15 · Nov 19, 2025 · Updated 4 months ago
- PyTorch implementation of Towards Efficient Training for Neural Network Quantization ☆16 · Jan 16, 2020 · Updated 6 years ago
- Official code repository of the paper Linear Transformers Are Secretly Fast Weight Programmers. ☆113 · Jun 10, 2021 · Updated 4 years ago
- Benchmarking Attention Mechanism in Vision Transformers. ☆20 · Oct 10, 2022 · Updated 3 years ago
- Sparse Attention with Linear Units ☆20 · Apr 21, 2021 · Updated 4 years ago
- This PyTorch package implements PLATON: Pruning Large Transformer Models with Upper Confidence Bound of Weight Importance (ICML 2022). ☆46 · Oct 17, 2022 · Updated 3 years ago
- Dynamic config system based on Python classes ☆12 · Jan 27, 2023 · Updated 3 years ago
- ☆13 · Nov 13, 2020 · Updated 5 years ago
- ☆18 · Oct 4, 2022 · Updated 3 years ago
- The code for the paper "Diversifying Dialog Generation via Adaptive Label Smoothing" (ACL 2021). ☆26 · Jun 7, 2021 · Updated 4 years ago
- [ICML 2024] Self-Infilling Code Generation ☆18 · May 5, 2024 · Updated last year
- [NeurIPS 2025] This is the official repository for "RAD: Towards Trustworthy Retrieval-Augmented Multi-modal Clinical Diagnosis" ☆27 · Nov 21, 2025 · Updated 4 months ago
- https://interactivetraining.ai/ ☆17 · Oct 2, 2025 · Updated 6 months ago
- ☆14 · Nov 20, 2022 · Updated 3 years ago
- NeurIPS'22 Oral: EquiVSet - Learning Neural Set Functions Under the Optimal Subset Oracle ☆21 · Dec 23, 2022 · Updated 3 years ago
- ToeffiPy is a PyTorch-like autograd/deep learning library based only on NumPy. ☆16 · Mar 28, 2022 · Updated 4 years ago
- Locality Preserving Dense Graph Convolutional Networks with Graph Context-Aware Node Representations ☆11 · Jan 21, 2021 · Updated 5 years ago
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆81 · Aug 30, 2023 · Updated 2 years ago
- Pytorch2Jax is a small Python library that provides functions that wrap PyTorch models into Jax functions and Flax modules. ☆21 · Feb 20, 2023 · Updated 3 years ago
- ☆10 · Jun 21, 2021 · Updated 4 years ago
- MetA-Train to Explain ☆18 · Feb 15, 2022 · Updated 4 years ago
- Video examples of "Appearance Composing GAN: A General Method for Appearance-Controllable Human Video Motion Transfer" ☆15 · Dec 28, 2020 · Updated 5 years ago
- Code accompanying our paper "Feature Learning in Infinite-Width Neural Networks" (https://arxiv.org/abs/2011.14522) ☆62 · May 11, 2021 · Updated 4 years ago
- ☆10 · Apr 8, 2018 · Updated 8 years ago
- Reproduction of the CVPR'21 paper "Distilling Knowledge via Knowledge Review" for the ML Reproducibility Challenge 2021 ☆10 · Apr 16, 2022 · Updated 3 years ago
- ☆13 · Oct 22, 2023 · Updated 2 years ago
- PyTorch implementation of the paper "Neural Canonical Transformation with Symplectic Flows" ☆33 · Mar 9, 2020 · Updated 6 years ago
- Neuromorphic ASIC with 96 neurons on Tiny Tapeout 7 ☆11 · May 25, 2024 · Updated last year
- Trains Transformer model variants. Data isn't shuffled between batches. ☆143 · Oct 5, 2022 · Updated 3 years ago