seanswyi / transformer-implementation
Personal implementation of the Transformer paper.
⭐22 · Updated 2 years ago
Alternatives and similar repositories for transformer-implementation
Users interested in transformer-implementation are comparing it to the libraries listed below.
- NLP Examples using the 🤗 libraries ⭐40 · Updated 4 years ago
- A 🤗-style implementation of BERT using lambda layers instead of self-attention ⭐69 · Updated 5 years ago
- Code for the Shortformer model, from the ACL 2021 paper by Ofir Press, Noah A. Smith and Mike Lewis. ⭐147 · Updated 4 years ago
- On Generating Extended Summaries of Long Documents ⭐78 · Updated 4 years ago
- Visualising the Transformer encoder ⭐111 · Updated 5 years ago
- ⭐46 · Updated 5 years ago
- Accelerated NLP pipelines for fast inference on CPU. Built with Transformers and ONNX Runtime. ⭐127 · Updated 5 years ago
- TorchServe + Streamlit for easily serving your HuggingFace NER models ⭐33 · Updated 3 years ago
- Code for the paper: Saying No is An Art: Contextualized Fallback Responses for Unanswerable Dialogue Queries ⭐19 · Updated 4 years ago
- A small, interpretable codebase containing the re-implementation of a few "deep" NLP models in PyTorch. Colab notebooks to run with GPUs… ⭐75 · Updated 5 years ago
- The second part of the Deep Learning Course for the Master in High-Performance Computing (SISSA/ICTP). ⭐33 · Updated 5 years ago
- http://nlp.seas.harvard.edu/2018/04/03/attention.html ⭐62 · Updated 4 years ago
- State-of-the-art faster Transformer with TensorFlow 2.0 (NLP, Computer Vision, Audio). ⭐85 · Updated 2 years ago
- Implementation, trained models and result data for the paper "Aspect-based Document Similarity for Research Papers" #COLING2020 ⭐63 · Updated last year
- Low-code pre-built pipelines for experiments with huggingface/transformers for Data Scientists in a rush. ⭐16 · Updated 5 years ago
- Unofficial PyTorch implementation of Fastformer, based on the paper "Fastformer: Additive Attention Can Be All You Need". ⭐132 · Updated 4 years ago
- Load What You Need: Smaller Multilingual Transformers for PyTorch and TensorFlow 2.0. ⭐105 · Updated 3 years ago
- Use fastai-v2 with HuggingFace's pretrained transformers ⭐110 · Updated 5 years ago
- Distillation of a BERT model with the Catalyst framework ⭐78 · Updated 2 years ago
- An easy-to-use Python module that helps you extract BERT embeddings for a large text dataset (Bengali/English) efficiently. ⭐36 · Updated 2 years ago
- A package for fine-tuning Transformers with TPUs, written in TensorFlow 2.0+ ⭐37 · Updated 4 years ago
- The "tl;dr" on a few notable transformer papers (pre-2022). ⭐189 · Updated 3 years ago
- Google's BigBird (Jax/Flax & PyTorch) @ 🤗 Transformers ⭐49 · Updated 2 years ago
- What are the best Systems? New Perspectives on NLP Benchmarking ⭐13 · Updated 2 years ago
- 🛠️ Tools for Transformers compression using PyTorch Lightning ⚡ ⭐85 · Updated last week
- PyTorch implementation of GLOM ⭐23 · Updated 3 years ago
- A deep learning library based on PyTorch, focused on low-resource language research and robustness ⭐70 · Updated 4 years ago
- Multitask Learning with Pretrained Transformers ⭐39 · Updated 4 years ago
- KitanaQA: Adversarial training and data augmentation for neural question-answering models ⭐56 · Updated 2 years ago
- Create interactive textual heat maps for Jupyter notebooks ⭐196 · Updated last year