nengwp / Lion-vs-Adam
Lion and Adam optimization comparison
☆64 · Updated 2 years ago
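The repository compares the Lion and Adam(W) update rules. As a rough reference, here is a minimal sketch of the two updates in plain PyTorch; function names and hyperparameter defaults are illustrative and not taken from this repo:

```python
import torch

@torch.no_grad()
def lion_step(param, grad, m, lr=1e-4, beta1=0.9, beta2=0.99, wd=0.0):
    # Lion: step in the sign of an interpolated momentum,
    # with decoupled weight decay folded into the same update.
    update = (beta1 * m + (1 - beta1) * grad).sign()
    param.add_(update + wd * param, alpha=-lr)
    # The momentum buffer is refreshed with a second coefficient.
    m.mul_(beta2).add_(grad, alpha=1 - beta2)

@torch.no_grad()
def adamw_step(param, grad, m, v, step, lr=1e-3, beta1=0.9, beta2=0.999,
               eps=1e-8, wd=0.0):
    # AdamW: bias-corrected first/second moments plus decoupled weight decay.
    m.mul_(beta1).add_(grad, alpha=1 - beta1)
    v.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)
    m_hat = m / (1 - beta1 ** step)
    v_hat = v / (1 - beta2 ** step)
    param.add_(m_hat / (v_hat.sqrt() + eps) + wd * param, alpha=-lr)
```

Note the difference the comparison explores: Lion keeps a single momentum buffer and applies a uniform-magnitude sign update, while AdamW keeps two buffers and rescales each coordinate by its second-moment estimate.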
Alternatives and similar repositories for Lion-vs-Adam
Users interested in Lion-vs-Adam are comparing it to the repositories listed below
- A Tight-fisted Optimizer ☆50 · Updated 2 years ago
- A Transformer model based on the Gated Attention Unit (preview version) ☆98 · Updated 2 years ago
- Official implementation of TransNormerLLM: A Faster and Better LLM ☆246 · Updated last year
- Research without Re-search: Maximal Update Parametrization Yields Accurate Loss Prediction across Scales ☆32 · Updated 2 years ago
- [ICLR 2024] EMO: Earth Mover Distance Optimization for Auto-Regressive Language Modeling (https://arxiv.org/abs/2310.04691) ☆125 · Updated last year
- This is a personal reimplementation of Google's Infini-transformer, utilizing a small 2b model. The project includes both model and train… ☆58 · Updated last year
- [ICML'24] The official implementation of “Rethinking Optimization and Architecture for Tiny Language Models” ☆123 · Updated 8 months ago
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆78 · Updated last year
- Rectified Rotary Position Embeddings ☆381 · Updated last year
- Converting Mixtral-8x7B to Mixtral-[1~7]x7B ☆22 · Updated last year
- ☆209 · Updated 10 months ago
- Low-bit optimizers for PyTorch ☆131 · Updated last year
- [EVA ICLR'23; LARA ICML'22] Efficient attention mechanisms via control variates, random features, and importance sampling ☆86 · Updated 2 years ago
- A prototype repo for hybrid training of pipeline parallel and distributed data parallel with comments on core code snippets. Feel free to… ☆56 · Updated 2 years ago
- ☆106 · Updated last year
- ☆114 · Updated last year
- ☆197 · Updated last year
- Code for Scaling Laws of RoPE-based Extrapolation ☆73 · Updated last year
- [ICML'24 Oral] The official code of "DiJiang: Efficient Large Language Models through Compact Kernelization", a novel DCT-based linear at… ☆102 · Updated last year
- Train LLMs (bloom, llama, baichuan2-7b, chatglm3-6b) with DeepSpeed pipeline mode; faster than ZeRO/ZeRO++/FSDP ☆98 · Updated last year
- Implementation of "Attention Is Off By One" by Evan Miller ☆196 · Updated 2 years ago
- ☆14 · Updated last year
- A Tight-fisted Optimizer (Tiger), implemented in PyTorch (see the sketch after this list) ☆12 · Updated last year
- [EMNLP 2022] Official implementation of Transnormer in our EMNLP 2022 paper, “The Devil in Linear Transformer” ☆62 · Updated 2 years ago
- (Unofficial) PyTorch implementation of grouped-query attention (GQA) from "GQA: Training Generalized Multi-Query Transformer Models from … ☆179 · Updated last year
- ☆104 · Updated 2 months ago
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ☆328 · Updated 6 months ago
- Official implementation of “Training on the Benchmark Is Not All You Need” ☆35 · Updated 8 months ago
- NTK-scaled version of ALiBi position encoding in Transformer ☆69 · Updated 2 years ago
- Official code for our paper, "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" ☆131 · Updated 5 months ago
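The "Tight-fisted Optimizer" (Tiger) entries above describe a sign-momentum update that is effectively Lion with a single shared coefficient. A minimal sketch in the same style as the Lion/AdamW examples earlier; the beta default is illustrative, not taken from the repo:

```python
import torch

@torch.no_grad()
def tiger_step(param, grad, m, lr=1e-4, beta=0.965, wd=0.0):
    # Tiger: one EMA buffer shared between the update direction and the
    # momentum (i.e. Lion with beta1 == beta2); the step is sign(momentum).
    m.mul_(beta).add_(grad, alpha=1 - beta)
    param.add_(m.sign() + wd * param, alpha=-lr)
```

Keeping a single buffer is what makes the optimizer "tight-fisted": it stores half the state of Adam while retaining a Lion-style sign update.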