hazdzz / tiger
A Tight-fisted Optimizer (Tiger), implemented in PyTorch.
☆12 · Updated last year
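For context, Tiger's "tight-fisted" trick is keeping a single momentum buffer per parameter and taking sign-based steps, so its optimizer state is one buffer where Adam keeps two. Below is a minimal PyTorch sketch of that update rule; the class skeleton, the default `beta`, and the decoupled weight decay handling are illustrative assumptions, not this repo's exact API.

```python
import torch
from torch.optim import Optimizer

class Tiger(Optimizer):
    """Minimal sketch of a Tiger-style optimizer (assumed API, not the repo's)."""

    def __init__(self, params, lr=1e-3, beta=0.965, weight_decay=0.0):
        defaults = dict(lr=lr, beta=beta, weight_decay=weight_decay)
        super().__init__(params, defaults)

    @torch.no_grad()
    def step(self, closure=None):
        loss = closure() if closure is not None else None
        for group in self.param_groups:
            lr, beta, wd = group["lr"], group["beta"], group["weight_decay"]
            for p in group["params"]:
                if p.grad is None:
                    continue
                state = self.state[p]
                if "m" not in state:
                    state["m"] = torch.zeros_like(p)
                m = state["m"]
                # The only optimizer state: an exponential moving average of the gradient.
                m.mul_(beta).add_(p.grad, alpha=1 - beta)
                # Decoupled weight decay (assumed to follow the AdamW convention).
                if wd != 0:
                    p.mul_(1 - lr * wd)
                # Sign-of-momentum step, as in Lion but with a single beta.
                p.add_(torch.sign(m), alpha=-lr)
        return loss
```

Usage would follow the standard pattern: `opt = Tiger(model.parameters(), lr=1e-3)`, then the usual `loss.backward(); opt.step(); opt.zero_grad()` loop.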
Alternatives and similar repositories for tiger
Users interested in tiger are comparing it to the repositories listed below.
- A Tight-fisted Optimizer ☆50 · Updated 2 years ago
- Lion and Adam optimization comparison ☆64 · Updated 2 years ago
- [ICLR 2024] EMO: Earth Mover Distance Optimization for Auto-Regressive Language Modeling (https://arxiv.org/abs/2310.04691) ☆128 · Updated last year
- A Transformer model based on the Gated Attention Unit (preview version) ☆98 · Updated 2 years ago
- [EMNLP 2022] Official implementation of Transnormer from the EMNLP 2022 paper "The Devil in Linear Transformer" ☆64 · Updated 2 years ago
- Research without Re-search: Maximal Update Parametrization Yields Accurate Loss Prediction across Scales ☆32 · Updated 2 years ago
- [EVA ICLR'23; LARA ICML'22] Efficient attention mechanisms via control variates, random features, and importance sampling ☆87 · Updated 2 years ago
- [ICLR 2023] Tailoring Language Generation Models under Total Variation Distance ☆21 · Updated 2 years ago
- A personal reimplementation of Google's Infini-transformer using a small 2B model. The project includes both model and train… ☆58 · Updated last year
- FLASHQuad_pytorch ☆68 · Updated 3 years ago
- ☆107 · Updated last year
- An Experiment on Dynamic NTK Scaling RoPE ☆64 · Updated 2 years ago
- Official code for the ICLR 2022 paper "PoNet: Pooling Network for Efficient Token Mixing in Long Sequences" ☆33 · Updated 2 years ago
- A single-model, multi-scale VAE based on Transformer ☆58 · Updated 4 years ago
- A repository for DenseSSMs ☆88 · Updated last year
- Hugging Face ChineseBert tokenizer ☆16 · Updated 3 years ago
- [EMNLP 2022] Differentiable Data Augmentation for Contrastive Sentence Representation Learning (https://arxiv.org/abs/2210.16536) ☆40 · Updated 3 years ago
- ☆90 · Updated 8 months ago
- ☆22 · Updated last year
- 1.4B sLLM for Chinese and English - HammerLLM🔨 ☆43 · Updated last year
- This PyTorch package implements MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation (NAACL 2022). ☆112 · Updated 3 years ago
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆56 · Updated 2 years ago
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆78 · Updated last year
- User-friendly implementation of the Mixture-of-Sparse-Attention (MoSA). MoSA selects distinct tokens for each head with expert choice rou… ☆28 · Updated 8 months ago
- ☆201 · Updated 2 years ago
- Contextual Position Encoding with some custom CUDA kernels (https://arxiv.org/abs/2405.18719) ☆22 · Updated last year
- PyTorch implementation of "Block Recurrent Transformers" (Hutchins & Schlag et al., 2022) ☆85 · Updated 3 years ago
- AAAI 2024 Global Competition on Math Problem Solving and Reasoning ☆14 · Updated 2 years ago
- Code for the ICML 2025 paper "Metadata Conditioning Accelerates Language Model Pre-training (MeCo)" ☆49 · Updated 6 months ago
- Code for the paper "A Neural Span-Based Continual Named Entity Recognition Model" ☆18 · Updated 2 years ago