nengwp / Lion-vs-Adam
Lion and Adam optimization comparison
☆60 Updated 2 years ago
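For orientation, here is a minimal sketch of the two update rules the repository compares, written as single-tensor steps in PyTorch. The function names and default hyperparameters are illustrative, not taken from the repository; the Lion rule follows Chen et al., “Symbolic Discovery of Optimization Algorithms” (2023).

```python
import torch

@torch.no_grad()
def adam_step(p, g, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam step: EMAs of the gradient and its square, bias-corrected."""
    m.mul_(beta1).add_(g, alpha=1 - beta1)
    v.mul_(beta2).addcmul_(g, g, value=1 - beta2)
    m_hat = m / (1 - beta1 ** t)  # bias correction at step t (1-indexed)
    v_hat = v / (1 - beta2 ** t)
    p.addcdiv_(m_hat, v_hat.sqrt().add_(eps), value=-lr)

@torch.no_grad()
def lion_step(p, g, m, lr=1e-4, beta1=0.9, beta2=0.99, wd=0.0):
    """One Lion step: sign of an interpolated momentum, so every coordinate
    moves by exactly +/-lr, plus decoupled weight decay."""
    update = (beta1 * m + (1 - beta1) * g).sign()
    p.add_(update + wd * p, alpha=-lr)
    m.mul_(beta2).add_(g, alpha=1 - beta2)  # momentum refreshed after the step
```

Note that Lion keeps a single momentum buffer where Adam keeps two, and because its step is uniformly ±lr per coordinate, the Lion paper recommends learning rates roughly 3–10× smaller than Adam's.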
Alternatives and similar repositories for Lion-vs-Adam:
Users interested in Lion-vs-Adam are comparing it to the libraries listed below
- A Tight-fisted Optimizer (see the Tiger sketch after this list) ☆47 Updated 2 years ago
- ☆181 Updated 5 months ago
- A Transformer model based on the Gated Attention Unit (preview version) ☆97 Updated 2 years ago
- A Tight-fisted Optimizer (Tiger), implemented in PyTorch. ☆11 Updated 8 months ago
- Converting Mixtral-8x7B to Mixtral-[1~7]x7B ☆22 Updated last year
- This is a personal reimplementation of Google's Infini-transformer, utilizing a small 2B model. The project includes both model and train… ☆56 Updated 11 months ago
- A simple trial of Ladder Side-Tuning on CLUE ☆19 Updated 2 years ago
- [ICLR 2024] EMO: Earth Mover Distance Optimization for Auto-Regressive Language Modeling (https://arxiv.org/abs/2310.04691) ☆119 Updated last year
- Low-bit optimizers for PyTorch ☆125 Updated last year
- Research without Re-search: Maximal Update Parametrization Yields Accurate Loss Prediction across Scales ☆32 Updated last year
- ☆170 Updated 8 months ago
- [ICML'24] The official implementation of “Rethinking Optimization and Architecture for Tiny Language Models” ☆121 Updated 2 months ago
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆76 Updated last year
- Train LLMs (BLOOM, LLaMA, Baichuan2-7B, ChatGLM3-6B) with the DeepSpeed pipeline mode. Faster than ZeRO/ZeRO++/FSDP. ☆94 Updated last year
- Code for “Scaling Laws of RoPE-based Extrapolation” ☆70 Updated last year
- Official code for our paper, “LoRA-Pro: Are Low-Rank Adapters Properly Optimized?” ☆104 Updated last week
- A more efficient GLM implementation! ☆55 Updated 2 years ago
- ☆100 Updated 8 months ago
- A prototype repo for hybrid training of pipeline parallel and distributed data parallel with comments on core code snippets. Feel free to… ☆55 Updated last year
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆19 Updated last year
- 😎 A simple and easy-to-use toolkit for GPU scheduling. ☆42 Updated 3 years ago
- ☆15 Updated 11 months ago
- ☆98 Updated 5 months ago
- [EVA ICLR'23; LARA ICML'22] Efficient attention mechanisms via control variates, random features, and importance sampling ☆82 Updated 2 years ago
- NTK scaled version of ALiBi position encoding in Transformer. ☆66 Updated last year
- A generalized framework for subspace tuning methods in parameter efficient fine-tuning. ☆130 Updated last month
- SuperCLUE-Math6: exploring a new generation of natively Chinese multi-turn, multi-step mathematical reasoning datasets ☆54 Updated last year
- An Experiment on Dynamic NTK Scaling RoPE (see the sketch after this list) ☆62 Updated last year
- Official implementation of “Training on the Benchmark Is Not All You Need”. ☆29 Updated 2 months ago
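Two of the entries above point at Tiger, the “Tight-fisted Optimizer”. As a rough companion to the Lion sketch earlier, here is a minimal single-tensor version of the sign-momentum update described in Su Jianlin's Tiger write-up; the buffer handling and default hyperparameters here are my own illustrative choices.

```python
import torch

@torch.no_grad()
def tiger_step(p, g, m, lr=1e-4, beta=0.96, wd=0.01):
    """One Tiger step: a single shared-beta EMA buffer and a sign-only update,
    halving optimizer state relative to Adam's two buffers."""
    m.mul_(beta).add_(g, alpha=1 - beta)   # one EMA serves both Lion roles
    p.add_(m.sign() + wd * p, alpha=-lr)   # sign step + decoupled weight decay
```

The design point is memory: by reusing one momentum buffer for both the update direction and the running average (where Lion interpolates between two betas), Tiger keeps a single state tensor per parameter.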
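Several entries above (the NTK-scaled ALiBi and the Dynamic NTK Scaling RoPE experiment) revolve around NTK-aware context extension. For reference, a minimal sketch of the dynamic rescaling of the RoPE base as it appears in common open-source implementations (e.g. Hugging Face's dynamic-NTK rotary embedding); the function and argument names are hypothetical.

```python
def dynamic_ntk_base(base: float, dim: int, seq_len: int,
                     max_trained_len: int, scaling_factor: float = 1.0) -> float:
    """Grow the RoPE base once the context exceeds the trained length,
    stretching low-frequency components instead of interpolating positions."""
    if seq_len <= max_trained_len:
        return base  # within the trained window, leave RoPE untouched
    ratio = scaling_factor * seq_len / max_trained_len - (scaling_factor - 1)
    return base * ratio ** (dim / (dim - 2))
```

For example, with a 128-dim head trained at base 10000, doubling the context raises the base to roughly 10000 · 2^(128/126) ≈ 20200, slowing the long-wavelength rotations just enough to cover the extended range.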