hazdzz / tiger
A Tight-fisted Optimizer (Tiger), implemented in PyTorch.
☆12 · Updated last year
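For orientation, Tiger keeps a single momentum buffer and updates parameters with the sign of that buffer, which is where the "tight-fisted" framing comes from. The sketch below is a minimal PyTorch illustration of that sign-of-momentum rule as I understand it from the Tiger write-up; the class name `TigerSketch`, the defaults, and the API are illustrative assumptions, not this repository's actual interface.

```python
import torch


class TigerSketch(torch.optim.Optimizer):
    """Minimal sketch of a Tiger-style sign-of-momentum update (not the repo's API)."""

    def __init__(self, params, lr=1e-3, beta=0.9, weight_decay=0.0):
        # All defaults here are illustrative assumptions.
        super().__init__(params, dict(lr=lr, beta=beta, weight_decay=weight_decay))

    @torch.no_grad()
    def step(self, closure=None):
        for group in self.param_groups:
            for p in group["params"]:
                if p.grad is None:
                    continue
                state = self.state[p]
                if "m" not in state:
                    state["m"] = torch.zeros_like(p)
                m = state["m"]
                # Single EMA buffer: m <- beta * m + (1 - beta) * grad
                m.mul_(group["beta"]).add_(p.grad, alpha=1 - group["beta"])
                # Decoupled weight decay, then a pure sign update
                p.mul_(1 - group["lr"] * group["weight_decay"])
                p.add_(torch.sign(m), alpha=-group["lr"])
```

Usage mirrors any `torch.optim` optimizer: construct it with `model.parameters()`, call `loss.backward()`, then `step()`. The appeal is that only one state tensor per parameter is kept, versus Adam's two.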
Alternatives and similar repositories for tiger
Users interested in tiger are comparing it to the libraries listed below.
- A Tight-fisted Optimizer ☆48 · Updated 2 years ago
- Lion and Adam optimization comparison (a Lion sketch follows this list) ☆61 · Updated 2 years ago
- ICLR 2023 - Tailoring Language Generation Models under Total Variation Distance ☆21 · Updated 2 years ago
- User-friendly implementation of the Mixture-of-Sparse-Attention (MoSA). MoSA selects distinct tokens for each head with expert choice rou… ☆21 · Updated last month
- ☆14 · Updated last year
- [EMNLP 2022] Official implementation of Transnormer in our EMNLP 2022 paper "The Devil in Linear Transformer" ☆60 · Updated last year
- [ICLR 2024] EMO: Earth Mover Distance Optimization for Auto-Regressive Language Modeling (https://arxiv.org/abs/2310.04691) ☆123 · Updated last year
- Code for the paper "A Neural Span-Based Continual Named Entity Recognition Model" ☆16 · Updated last year
- Research without Re-search: Maximal Update Parametrization Yields Accurate Loss Prediction across Scales ☆32 · Updated last year
- Mixture of Attention Heads ☆47 · Updated 2 years ago
- Contextual Position Encoding but with some custom CUDA kernels (https://arxiv.org/abs/2405.18719) ☆22 · Updated last year
- A collection of instruction data and scripts for machine translation ☆20 · Updated last year
- Code for the preprint "Metadata Conditioning Accelerates Language Model Pre-training (MeCo)" ☆39 · Updated last month
- ☆15 · Updated 8 months ago
- ☆18 · Updated last week
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆52 · Updated 2 years ago
- [ICLR 2022] Code for the paper "Exploring Extreme Parameter Compression for Pre-trained Language Models" (https://arxiv.org/abs/2205.10036) ☆22 · Updated 2 years ago
- A repository for DenseSSMs ☆87 · Updated last year
- A Transformer model based on the Gated Attention Unit (early-preview version)
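Since the Lion-vs-Adam comparison entry above turns on Lion's two-coefficient sign update, here is a minimal single-tensor sketch of the published Lion rule for contrast with the Tiger sketch earlier; the function name `lion_step` and its defaults are illustrative assumptions, not the linked repository's code.

```python
import torch


@torch.no_grad()
def lion_step(p, m, lr=1e-4, beta1=0.9, beta2=0.99, weight_decay=0.0):
    """One illustrative Lion update for parameter `p` with momentum buffer `m`."""
    g = p.grad
    # Update direction: sign of an interpolation between momentum and gradient
    update = torch.sign(m.mul(beta1).add(g, alpha=1 - beta1))
    p.mul_(1 - lr * weight_decay)  # decoupled weight decay
    p.add_(update, alpha=-lr)
    # Refresh the momentum buffer with the second coefficient
    m.mul_(beta2).add_(g, alpha=1 - beta2)
```

Setting `beta1` equal to `beta2` collapses this into the Tiger-style sign-of-momentum update sketched above, which is one way to see how the two optimizers relate.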