hazdzz / Tiger
A Tight-fisted Optimizer (Tiger), implemented in PyTorch.
☆12 · Updated last year
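Tiger ("Tight-fisted Optimizer") is commonly described as a sign-of-momentum update in the spirit of Lion that keeps only a single momentum buffer per parameter. The following is a minimal, hypothetical PyTorch sketch of that style of update; the class name, the exact rule, and the default beta are assumptions for illustration, not code taken from hazdzz/Tiger.

```python
import torch


class TigerSketch(torch.optim.Optimizer):
    """Illustrative sign-of-momentum optimizer (assumed rule, not hazdzz/Tiger's code).

    Assumed update:
        m_t = beta * m_{t-1} + (1 - beta) * g_t
        p_t = (1 - lr * wd) * p_{t-1} - lr * sign(m_t)
    """

    def __init__(self, params, lr=1e-3, beta=0.965, weight_decay=0.0):
        defaults = dict(lr=lr, beta=beta, weight_decay=weight_decay)
        super().__init__(params, defaults)

    @torch.no_grad()
    def step(self, closure=None):
        loss = None
        if closure is not None:
            with torch.enable_grad():
                loss = closure()
        for group in self.param_groups:
            lr, beta, wd = group["lr"], group["beta"], group["weight_decay"]
            for p in group["params"]:
                if p.grad is None:
                    continue
                state = self.state[p]
                if "m" not in state:
                    state["m"] = torch.zeros_like(p)   # single momentum buffer
                m = state["m"]
                m.mul_(beta).add_(p.grad, alpha=1 - beta)  # momentum EMA
                if wd != 0:
                    p.mul_(1 - lr * wd)                    # decoupled weight decay
                p.add_(torch.sign(m), alpha=-lr)           # sign-of-momentum step
        return loss
```

Usage follows the standard torch.optim pattern, e.g. `opt = TigerSketch(model.parameters(), lr=3e-4)` followed by the usual `loss.backward(); opt.step(); opt.zero_grad()` loop.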
Alternatives and similar repositories for Tiger
Users interested in Tiger are comparing it to the libraries listed below:
- Lion and Adam optimization comparison ☆64 · Updated 2 years ago
- A Tight-fisted Optimizer ☆50 · Updated 2 years ago
- [ICLR 2024] EMO: Earth Mover Distance Optimization for Auto-Regressive Language Modeling (https://arxiv.org/abs/2310.04691) ☆128 · Updated last year
- A Transformer model based on the Gated Attention Unit (preview version) ☆98 · Updated 2 years ago
- Official code for the ICLR 2022 paper "PoNet: Pooling Network for Efficient Token Mixing in Long Sequences" ☆33 · Updated 2 years ago
- [EVA ICLR'23; LARA ICML'22] Efficient attention mechanisms via control variates, random features, and importance sampling ☆87 · Updated 2 years ago
- [EMNLP 2022] Official implementation of Transnormer from the EMNLP 2022 paper "The Devil in Linear Transformer" ☆64 · Updated 2 years ago
- [EMNLP 2022] Differentiable Data Augmentation for Contrastive Sentence Representation Learning (https://arxiv.org/abs/2210.16536) ☆40 · Updated 3 years ago
- A repository for DenseSSMs ☆88 · Updated last year
- An Experiment on Dynamic NTK Scaling RoPE ☆64 · Updated 2 years ago
- Mixture of Attention Heads ☆51 · Updated 3 years ago
- A single-model, multi-scale VAE based on the Transformer ☆58 · Updated 4 years ago
- huggingface ChineseBert Tokenizer ☆16 · Updated 3 years ago
- ICLR 2023 - Tailoring Language Generation Models under Total Variation Distance ☆21 · Updated 3 years ago
- ☆106 · Updated last year
- ☆95 · Updated last year
- FLASHQuad_pytorch ☆68 · Updated 3 years ago
- This is a personal reimplementation of Google's Infini-transformer, utilizing a small 2b model. The project includes both model and train… ☆58 · Updated last year
- This PyTorch package implements MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation (NAACL 2022). ☆113 · Updated 3 years ago
- ☆16 · Updated 11 months ago
- Code and data to accompany the camera-ready version of "Cross-Attention is All You Need: Adapting Pretrained Transformers for Machine Tra… ☆33 · Updated 4 years ago
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆56 · Updated 2 years ago
- User-friendly implementation of the Mixture-of-Sparse-Attention (MoSA). MoSA selects distinct tokens for each head with expert choice rou… ☆28 · Updated 9 months ago
- Tool for converting LLMs from uni-directional to bi-directional by removing causal mask for tasks like classification and sentence embedd… ☆63 · Updated last year
- Notes on papers about Transformer improvements ☆19 · Updated 2 years ago
- The accompanying code for "Memory-efficient Transformers via Top-k Attention" (Ankit Gupta, Guy Dar, Shaya Goodman, David Ciprut, Jonatha… ☆70 · Updated 4 years ago
- ACL 2022 (Findings): A Sentence is Worth 128 Pseudo Tokens: A Semantic-Aware Contrastive Learning Framework for Sentence Embeddings ☆18 · Updated 3 years ago
- A *tuned* minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training ☆120 · Updated 4 years ago
- Text Diffusion Model with Encoder-Decoder Transformers for Sequence-to-Sequence Generation [NAACL 2024] ☆99 · Updated 2 years ago
- The code and dataset for "FastRE: Towards Fast Relation Extraction with Convolutional Encoder and Improved Cascade Binary Tagging Framewo… ☆24 · Updated 3 years ago