duyvuleo / Transformer-DyNet
An Implementation of Transformer (Attention Is All You Need) in DyNet
☆63 · Updated 11 months ago
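For orientation, the repository implements the attention mechanism from the paper named in its description (Vaswani et al., 2017). The following is a minimal restatement of the published formulas, not code or notation taken from this repository:

```latex
% Scaled dot-product attention (Vaswani et al., 2017), the building block a
% Transformer implementation such as Transformer-DyNet has to provide.
% Q, K, V are the query/key/value matrices; d_k is the key dimensionality.
\[
  \mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V
\]
% Multi-head attention runs h such attentions in parallel over learned
% projections and concatenates the results:
\[
  \mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \ldots, \mathrm{head}_h)\,W^{O},
  \qquad \mathrm{head}_i = \mathrm{Attention}(QW_i^{Q}, KW_i^{K}, VW_i^{V})
\]
```

How these operations are realized as DyNet computation-graph expressions is specific to the repository itself; consult its source for the actual implementation.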
Related projects
Alternatives and complementary repositories for Transformer-DyNet
- Decoding platform for machine translation research ☆54 · Updated 5 years ago
- Dynet-based Biaffine Parser ☆33 · Updated 5 years ago
- Implementation of "Arc-swift: A Novel Transition System for Dependency Parsing" ☆32 · Updated 6 years ago
- Dynamic data selection for neural machine translation ☆20 · Updated 6 years ago
- CytonMT: an Efficient Neural Machine Translation Open-source Toolkit Implemented in C++ ☆21 · Updated 6 years ago
- ☆43 · Updated 6 years ago
- Universal segmenter based on the Universal Dependency framework, written by Y. Shao, Uppsala University ☆35 · Updated 5 years ago
- C++ code of "Tree-to-Sequence Attentional Neural Machine Translation (tree2seq ANMT)" ☆57 · Updated 7 years ago
- Lexically constrained decoding for sequence generation using Grid Beam Search ☆93 · Updated 6 years ago
- Python code for training models in the ACL paper "Beyond BLEU: Training Neural Machine Translation with Semantic Similarity" ☆52 · Updated 4 years ago
- Examples, tutorials and use cases for Marian, including our WMT-2017/18 baselines ☆78 · Updated last year
- Easy-first dependency parser based on Hierarchical Tree LSTMs ☆33 · Updated 7 years ago
- Non-autoregressive Neural Machine Translation (not a full version) ☆71 · Updated last year
- A minimal span-based neural constituency parser ☆87 · Updated 6 years ago
- An updated version of the Parser-v1 repo, used for Stanford's submission in the CoNLL17 shared task ☆47 · Updated 6 years ago
- Preprocessed data for the Workshop on Statistical Machine Translation (WMT), collected from papers or other projects ☆22 · Updated 7 years ago
- Source code for the paper "Multilingual Neural Machine Translation with Soft Decoupled Encoding" ☆29 · Updated 3 years ago
- Reproduction instructions for "Rapid Adaptation of Neural Machine Translation to New Languages" ☆39 · Updated 6 years ago
- Source code for "Accelerating Neural Transformer via an Average Attention Network" ☆78 · Updated 5 years ago
- ☆48 · Updated 7 years ago
- Gaussian Mixture Latent Vector Grammars ☆30 · Updated 5 years ago
- Document-Level Neural Machine Translation with Hierarchical Attention Networks ☆68 · Updated 2 years ago
- Neutron: A PyTorch-based implementation of Transformer and its variants. ☆63 · Updated last year
- Witwicky: An implementation of Transformer in PyTorch. ☆22 · Updated 4 years ago
- Text classification code described in "SoPa: Bridging CNNs, RNNs, and Weighted Finite-State Machines" by Roy Schwartz, Sam Thomson and No… ☆55 · Updated 2 years ago
- C++ code of "Learning to Parse and Translate Improves Neural Machine Translation" ☆21 · Updated 7 years ago
- Baseline models, training scripts, and instructions on how to reproduce our results for our state-of-the-art grammar correction system from M… ☆69 · Updated 5 years ago
- Uncovering divergent linguistic information in word embeddings with lessons for intrinsic and extrinsic evaluation ☆63 · Updated 6 years ago
- Chinese word segmentation model with word-based character embeddings. ☆12 · Updated 6 years ago
- Non-Monotonic Sequential Text Generation (ICML 2019) ☆73 · Updated 5 years ago