Implementation of the GBST block from the Charformer paper, in Pytorch
☆118, updated Jul 15, 2021
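For orientation, here is a minimal usage sketch of the GBST byte tokenizer, assuming the constructor and forward signature documented in the upstream charformer-pytorch README; keyword names such as `num_tokens`, `max_block_size`, `downsample_factor`, and `score_consensus_attn` are taken from that README and may differ between versions:

```python
# Minimal GBST usage sketch, assuming the API from the charformer-pytorch README.
import torch
from charformer_pytorch import GBST

tokenizer = GBST(
    num_tokens = 257,            # byte vocabulary (256) plus one padding token
    dim = 512,                   # dimension of byte and intra-block positional embeddings
    max_block_size = 4,          # largest candidate block size to score
    downsample_factor = 4,       # factor by which the sequence length shrinks
    score_consensus_attn = True  # optional consensus attention over block scores
)

bytes_in = torch.randint(0, 257, (1, 1023))  # odd length on purpose; GBST pads internally
mask = torch.ones(1, 1023).bool()

# returns downsampled token embeddings and a correspondingly downsampled mask
tokens, mask = tokenizer(bytes_in, mask = mask)
print(tokens.shape)  # (1, 256, 512) after 4x downsampling of the padded sequence
```

The downsampled embeddings can then be fed to a standard Transformer encoder in place of subword-token embeddings, which is the intended use in the Charformer paper.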
Alternatives and similar repositories for charformer-pytorch
Users that are interested in charformer-pytorch are comparing it to the libraries listed below.
- Implementation of Long-Short Transformer, combining local and global inductive biases for attention over long sequences, in Pytorch (☆120, updated Aug 4, 2021)
- Implementation of Token Shift GPT, an autoregressive model that solely relies on shifting the sequence space for mixing (☆49, updated Jan 27, 2022)
- Implementation of N-Grammer, augmenting Transformers with latent n-grams, in Pytorch (☆76, updated Dec 4, 2022)
- Zero-vocab or low-vocab embeddings (☆18, updated Jul 17, 2022)
- GPT, but made only out of MLPs (☆89, updated May 25, 2021)
- Implementation of Multistream Transformers in Pytorch (☆54, updated Jul 31, 2021)
- Another attempt at a long-context / efficient transformer by me (☆38, updated Apr 11, 2022)
- ☆15, updated Sep 24, 2022
- MeCab model trained with OpenKorPos (☆23, updated Jun 19, 2022)
- ☆23, updated Oct 30, 2023
- Implementation of Fast Transformer in Pytorch (☆176, updated Aug 26, 2021)
- Implementation of Kronecker Attention in Pytorch (☆19, updated Sep 12, 2020)
- Implementation of H-Transformer-1D, Hierarchical Attention for Sequence Learning (☆166, updated Feb 12, 2024)
- (ACL-IJCNLP 2021) Convolutions and Self-Attention: Re-interpreting Relative Positions in Pre-trained Language Models (☆21, updated Jul 13, 2022)
- Convenient Text-to-Text Training for Transformers (☆19, updated Dec 10, 2021)
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 (☆49, updated Apr 6, 2022)
- Implementation of Hourglass Transformer, in Pytorch, from Google and OpenAI (☆99, updated Dec 31, 2021)
- Indonesian T0 | Instruction-tuning for low-resource and extremely low-resource Austronesian languages (☆17, updated Jun 24, 2024)
- Python Research Framework (☆107, updated Nov 3, 2022)
- Implementation of fused cosine similarity attention in the same style as Flash Attention (☆220, updated Feb 13, 2023)
- An attempt to merge ESBN with Transformers, to endow Transformers with the ability to emergently bind symbols (☆16, updated Aug 3, 2021)
- Fine-Tuning Pre-trained Transformers into Decaying Fast Weights (☆19, updated Oct 9, 2022)
- My explorations into editing the knowledge and memories of an attention network (☆35, updated Dec 8, 2022)
- A 🤗-style implementation of BERT using lambda layers instead of self-attention (☆69, updated Oct 19, 2020)
- AAAI 2022 Paper: Bet even Beth Harmon couldn't learn chess like that :) (☆38, updated Mar 3, 2021)
- A Python library for highly configurable transformers, easing model architecture search and experimentation (☆48, updated Nov 30, 2021)
- Python scripts to create noisy and reverberant 2-speaker mixture audio with Libri-Light and WHAM (☆17, updated Nov 7, 2024)
- Antipasti-TF is a lightweight wrapper around Tensorflow for building convolutional neural networks with complex architectures (☆16, updated May 12, 2017)
- Explorations into adversarial losses on top of autoregressive loss for language modeling (☆41, updated Dec 21, 2025)
- Implementation of Metaformer, but in an autoregressive manner (☆26, updated Jun 21, 2022)
- Code for the EMNLP 2021 paper: Improving Sequence-to-Sequence Pre-training via Sequence Span Rewriting (☆17, updated Nov 30, 2021)
- An implementation of (Induced) Set Attention Block, from the Set Transformers paper (☆67, updated Jan 10, 2023)
- Pytorch implementation and pre-trained Japanese model for CANINE, the efficient character-level transformer (☆89, updated Nov 3, 2023)
- Implementation of a Transformer, but completely in Triton (☆279, updated Apr 5, 2022)
- Axial Positional Embedding for Pytorch (☆84, updated Feb 25, 2025)
- Implementation of Mega, the Single-head Attention with Multi-headed EMA architecture that currently holds SOTA on Long Range Arena (☆207, updated Aug 26, 2023)
- List of corpora annotated for coreference for different languages (☆18, updated Aug 8, 2024)
- Plug-and-play Search Interfaces with Pyserini and Hugging Face (☆32, updated Aug 5, 2023)
- ☆542, updated Feb 13, 2024