hazdzz / tiger
A Tight-fisted Optimizer (Tiger), implemented in PyTorch.
☆11 · Updated 10 months ago
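For context, the Tiger ("Tight-fisted") optimizer is generally described as keeping a single momentum buffer and stepping in the direction of its sign, which is what makes it memory-frugal. The sketch below illustrates that rule as a standard `torch.optim.Optimizer`; the class name `TigerSketch`, the default hyperparameters, and the decoupled weight decay are illustrative assumptions, not the API of this repository.

```python
# A minimal sketch of a Tiger-style sign-momentum update, written against the
# standard torch.optim.Optimizer interface. Names and defaults are assumptions
# for illustration; they are not taken from the hazdzz/tiger code base.
import torch
from torch.optim.optimizer import Optimizer


class TigerSketch(Optimizer):
    """Tight-fisted-style optimizer: one momentum buffer, sign-based update."""

    def __init__(self, params, lr=1e-3, beta=0.965, weight_decay=0.0):
        defaults = dict(lr=lr, beta=beta, weight_decay=weight_decay)
        super().__init__(params, defaults)

    @torch.no_grad()
    def step(self, closure=None):
        loss = closure() if closure is not None else None
        for group in self.param_groups:
            lr, beta, wd = group["lr"], group["beta"], group["weight_decay"]
            for p in group["params"]:
                if p.grad is None:
                    continue
                state = self.state[p]
                if "m" not in state:
                    state["m"] = torch.zeros_like(p)
                m = state["m"]
                # Exponential moving average of the gradient (the only extra buffer).
                m.mul_(beta).add_(p.grad, alpha=1 - beta)
                # Decoupled weight decay, then a sign-of-momentum step.
                if wd != 0:
                    p.mul_(1 - lr * wd)
                p.add_(torch.sign(m), alpha=-lr)
        return loss
```

In use it would drop in like any other PyTorch optimizer, e.g. `opt = TigerSketch(model.parameters(), lr=1e-3)` followed by the usual `loss.backward(); opt.step(); opt.zero_grad()` loop.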
Alternatives and similar repositories for tiger:
Users interested in tiger are comparing it to the libraries listed below
- A Tight-fisted Optimizer ☆47 · Updated 2 years ago
- Lion and Adam optimization comparison ☆61 · Updated 2 years ago
- [ICLR 2024] EMO: Earth Mover Distance Optimization for Auto-Regressive Language Modeling (https://arxiv.org/abs/2310.04691) ☆122 · Updated last year
- Research without Re-search: Maximal Update Parametrization Yields Accurate Loss Prediction across Scales ☆32 · Updated last year
- [EVA ICLR'23; LARA ICML'22] Efficient attention mechanisms via control variates, random features, and importance sampling ☆85 · Updated 2 years ago
- ☆103 · Updated last year
- [EMNLP 2022] Differentiable Data Augmentation for Contrastive Sentence Representation Learning. https://arxiv.org/abs/2210.16536 ☆40 · Updated 2 years ago
- A Transformer model based on the Gated Attention Unit (preview version) ☆97 · Updated 2 years ago
- Code for preprint "Metadata Conditioning Accelerates Language Model Pre-training (MeCo)" ☆37 · Updated this week
- Code for paper "Patch-Level Training for Large Language Models" ☆84 · Updated 5 months ago
- [EMNLP 2022] Official implementation of Transnormer in our EMNLP 2022 paper "The Devil in Linear Transformer" ☆60 · Updated last year
- This is a personal reimplementation of Google's Infini-transformer, utilizing a small 2b model. The project includes both model and train… ☆56 · Updated last year
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆51 · Updated 2 years ago
- Plug-and-Play Document Modules for Pre-trained Models ☆26 · Updated last year
- Code for paper: A Neural Span-Based Continual Named Entity Recognition Model ☆16 · Updated last year
- ☆14 · Updated last year
- Implementation of "Decoding-time Realignment of Language Models", ICML 2024. ☆18 · Updated 10 months ago
- Mixture of Attention Heads ☆44 · Updated 2 years ago
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆76 · Updated last year
- Official code for ICLR 2022 paper: "PoNet: Pooling Network for Efficient Token Mixing in Long Sequences". ☆32 · Updated last year
- Contextual Position Encoding but with some custom CUDA kernels https://arxiv.org/abs/2405.18719 ☆22 · Updated 11 months ago
- Official implementation for "Parameter-Efficient Fine-Tuning Design Spaces" ☆26 · Updated 2 years ago
- An Experiment on Dynamic NTK Scaling RoPE ☆64 · Updated last year
- ICLR 2023 - Tailoring Language Generation Models under Total Variation Distance ☆21 · Updated 2 years ago
- Transformers at any scale ☆41 · Updated last year
- The code of paper "Learning to Break the Loop: Analyzing and Mitigating Repetitions for Neural Text Generation" published at NeurIPS 202… ☆46 · Updated 2 years ago
- Are Intermediate Layers and Labels Really Necessary? A General Language Model Distillation Method; GKD: A General Knowledge Distillation… ☆32 · Updated last year
- Code for EMNLP 2021 main conference paper "Dynamic Knowledge Distillation for Pre-trained Language Models" ☆40 · Updated 2 years ago
- ☆19 · Updated 2 years ago
- [ICLR 2025] MiniPLM: Knowledge Distillation for Pre-Training Language Models ☆42 · Updated 5 months ago