nengwp / Lion-vs-Adam
Lion and Adam optimization comparison
☆56 · Updated last year
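For reference, the comparison at the heart of this repo: Lion replaces Adam's per-coordinate second-moment rescaling with the sign of a momentum-interpolated gradient, so every coordinate moves by the same magnitude. A minimal PyTorch sketch of the Lion step as described in the Lion paper (illustrative only, not code from this repository; function and parameter names are assumptions):

```python
import torch

@torch.no_grad()
def lion_step(param, grad, exp_avg, lr=1e-4, beta1=0.9, beta2=0.99, wd=0.0):
    # Step direction: the sign of an interpolation between the momentum
    # buffer and the current gradient (Adam instead rescales by 1/sqrt(v)).
    update = (beta1 * exp_avg + (1 - beta1) * grad).sign()
    param.mul_(1 - lr * wd)        # decoupled weight decay, as in AdamW
    param.add_(update, alpha=-lr)  # every coordinate moves by exactly lr
    # The momentum buffer is updated with a second, slower coefficient.
    exp_avg.mul_(beta2).add_(grad, alpha=1 - beta2)
```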
Alternatives and similar repositories for Lion-vs-Adam:
Users who are interested in Lion-vs-Adam are comparing it to the repositories listed below
- A Tight-fisted Optimizer ☆47 · Updated last year
- A Transformer model based on the Gated Attention Unit (preview version) ☆97 · Updated last year
- A Tight-fisted Optimizer (Tiger), implemented in PyTorch. ☆11 · Updated 7 months ago
- [ICLR 2024] EMO: Earth Mover Distance Optimization for Auto-Regressive Language Modeling (https://arxiv.org/abs/2310.04691) ☆118 · Updated 10 months ago
- Official code for our paper, "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" ☆96 · Updated 3 months ago
- ☆13 · Updated last year
- Converting Mixtral-8x7B to Mixtral-[1~7]x7B ☆20 · Updated 10 months ago
- A personal reimplementation of Google's Infini-transformer, using a small 2B model. The project includes both model and train… ☆55 · Updated 9 months ago
- SuperCLUE-Math6: an exploration of a new generation of natively Chinese multi-turn, multi-step mathematical reasoning datasets ☆50 · Updated 11 months ago
- ☆171 · Updated 3 months ago
- Research without Re-search: Maximal Update Parametrization Yields Accurate Loss Prediction across Scales ☆31 · Updated last year
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆19 · Updated last year
- A prototype repo for hybrid pipeline-parallel and distributed-data-parallel training, with comments on core code snippets. Feel free to… ☆53 · Updated last year
- Code for Scaling Laws of RoPE-based Extrapolation ☆70 · Updated last year
- A more efficient GLM implementation! ☆55 · Updated last year
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆76 · Updated 10 months ago
- A simple experiment with Ladder Side-Tuning on CLUE ☆19 · Updated 2 years ago
- ☆47 · Updated last week
- An Experiment on Dynamic NTK Scaling RoPE (see the sketch after this list) ☆62 · Updated last year
- ☆94 · Updated 4 months ago
- A generalized framework for subspace tuning methods in parameter-efficient fine-tuning. ☆121 · Updated 3 weeks ago
- ☆162 · Updated 6 months ago
- NTK scaled version of ALiBi position encoding in Transformer. ☆67 · Updated last year
- Code for the paper "Patch-Level Training for Large Language Models" ☆77 · Updated 2 months ago
- Mixture of Attention Heads ☆41 · Updated 2 years ago
- [EVA ICLR'23; LARA ICML'22] Efficient attention mechanisms via control variates, random features, and importance sampling ☆80 · Updated last year
- Low-bit optimizers for PyTorch ☆125 · Updated last year
- A repo showcasing the use of MCTS with LLMs to solve GSM8K problems ☆46 · Updated 2 weeks ago
- [ICML'24 Oral] The official code of "DiJiang: Efficient Large Language Models through Compact Kernelization", a novel DCT-based linear at… ☆99 · Updated 7 months ago
- Official implementation of “Training on the Benchmark Is Not All You Need”. ☆28 · Updated last month
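Several entries above (the Dynamic NTK Scaling RoPE experiment and the NTK-scaled ALiBi repo) revolve around NTK-aware scaling of positional encodings. As a reference point, here is a minimal sketch of the commonly used dynamic-NTK base rescaling for RoPE (assuming the standard formulation, not code from any repository listed; names and defaults are illustrative):

```python
import torch

def dynamic_ntk_rope_angles(seq_len, dim=128, base=10000.0, train_len=4096):
    # When the sequence exceeds the training length, enlarge the RoPE base
    # so that low-frequency components interpolate smoothly while
    # high-frequency components stay nearly unchanged.
    scale = max(1.0, seq_len / train_len)
    adjusted_base = base * scale ** (dim / (dim - 2))
    inv_freq = 1.0 / adjusted_base ** (torch.arange(0, dim, 2).float() / dim)
    positions = torch.arange(seq_len).float()
    return torch.outer(positions, inv_freq)  # (seq_len, dim // 2) angles
```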