huggingface / paper-style-guide
☆72 · Updated 3 years ago
Alternatives and similar repositories for paper-style-guide
Users interested in paper-style-guide are comparing it to the libraries listed below.
- Figures I made during my PhD in Deep Learning, for my models and for context ☆80 · Updated 3 years ago
- Outlining techniques for improving the training performance of your PyTorch model without compromising its accuracy ☆129 · Updated 2 years ago
- Code for the anonymous submission "Cockpit: A Practical Debugging Tool for Training Deep Neural Networks" ☆31 · Updated 4 years ago
- An implementation of Transformer with Expire-Span, a circuit for learning which memories to retain ☆33 · Updated 4 years ago
- A lightweight transformer library for PyTorch ☆71 · Updated 3 years ago
- This repo is for our paper "Normalization Techniques in Training DNNs: Methodology, Analysis and Application" ☆84 · Updated 3 years ago
- ☆23 · Updated 2 years ago
- Loss and accuracy go opposite ways...right? ☆93 · Updated 5 years ago
- PDFs and Codelabs for the Efficient Deep Learning book ☆192 · Updated last year
- This repository holds code and other relevant files for the NeurIPS 2022 tutorial: Foundational Robustness of Foundation Models ☆70 · Updated 2 years ago
- Code for the paper "BERT Loses Patience: Fast and Robust Inference with Early Exit" ☆65 · Updated 3 years ago
- Unofficial PyTorch implementation of Fastformer, based on the paper "Fastformer: Additive Attention Can Be All You Need" ☆134 · Updated 3 years ago
- Deep learning project template with TensorFlow & PyTorch (multi-GPU version) ☆38 · Updated 11 months ago
- Implementation of the Remixer Block from the Remixer paper, in PyTorch ☆36 · Updated 3 years ago
- Code for "Multi-Head Attention: Collaborate Instead of Concatenate" ☆151 · Updated last year
- [ICML 2020] Code for "PowerNorm: Rethinking Batch Normalization in Transformers" https://arxiv.org/abs/2003.07845 ☆120 · Updated 3 years ago
- 🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX ☆82 · Updated 3 years ago
- A Python library for highly configurable transformers, easing model architecture search and experimentation ☆49 · Updated 3 years ago
- Implementation of Long-Short Transformer, combining local and global inductive biases for attention over long sequences, in PyTorch ☆119 · Updated 3 years ago
- The newest reading list for representation learning ☆115 · Updated 4 years ago
- Large-scale distributed model training strategy with Colossal AI and Lightning AI ☆57 · Updated last year
- Notebook for comprehensive analysis of authors, organizations, and countries of ICML 2020 papers ☆56 · Updated 4 years ago
- Skyformer: Remodel Self-Attention with Gaussian Kernel and Nyström Method (NeurIPS 2021) ☆60 · Updated 3 years ago
- Efficient Deep Learning survey paper ☆33 · Updated 2 years ago
- Toy implementations of some popular ML optimizers using Python/JAX ☆44 · Updated 3 years ago
- Notes from my introductory NLP course at Fudan University ☆37 · Updated 3 years ago
- This project shows how to derive the total number of training tokens in a large text dataset from 🤗 datasets with Apache Beam and Data… ☆26 · Updated 2 years ago
- Research boilerplate for PyTorch ☆149 · Updated last year
- 💪 A toolkit to help search for papers from aclanthology, arXiv and dblp ☆45 · Updated 2 years ago
- Implementation of N-Grammer, augmenting Transformers with latent n-grams, in PyTorch ☆74 · Updated 2 years ago