nisten / grokadamw
new optimizer
☆20 · Updated last year
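The repository is described here only as "new optimizer"; the sketch below shows how an AdamW-style PyTorch optimizer such as this is typically dropped into a training loop. The import path, class name `GrokAdamW`, and constructor arguments are assumptions for illustration, not taken from this page.

```python
# Minimal usage sketch, assuming grokadamw exposes a standard
# torch.optim.Optimizer subclass. The import path and constructor
# signature below are assumptions, not confirmed by the repository.
import torch
import torch.nn as nn
from grokadamw import GrokAdamW  # assumed import path

model = nn.Linear(128, 10)
optimizer = GrokAdamW(model.parameters(), lr=1e-3, weight_decay=0.01)

x = torch.randn(32, 128)
target = torch.randint(0, 10, (32,))

for _ in range(100):
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(x), target)
    loss.backward()
    optimizer.step()  # used here exactly like torch.optim.AdamW
```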
Alternatives and similar repositories for grokadamw
Users interested in grokadamw are comparing it to the repositories listed below
- Lightweight toolkit to train and fine-tune 1.58-bit language models ☆99 · Updated 6 months ago
- ☆55 · Updated last year
- Latent Large Language Models ☆19 · Updated last year
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆58 · Updated last month
- Experiments toward training a new and improved T5 ☆76 · Updated last year
- Parameter-Efficient Sparsity Crafting from Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆31 · Updated last year
- Collection of autoregressive model implementations ☆86 · Updated 7 months ago
- ☆136 · Updated last year
- ☆40 · Updated last year
- Repository containing the SPIN experiments on the DIBT 10k ranked prompts ☆24 · Updated last year
- Repo hosting code and materials related to speeding up LLM inference using token merging ☆37 · Updated last month
- RWKV-7: Surpassing GPT ☆101 · Updated last year
- ☆39 · Updated last year
- A repository for research on medium-sized language models ☆78 · Updated last year
- ☆52 · Updated last year
- Simple GRPO scripts and configurations ☆59 · Updated 9 months ago
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆146 · Updated 9 months ago
- ☆50 · Updated last year
- ☆58 · Updated last week
- ☆53 · Updated last year
- My implementation of Q-Sparse: All Large Language Models Can Be Fully Sparsely-Activated ☆33 · Updated last year
- Official repo for Learning to Reason for Long-Form Story Generation ☆72 · Updated 7 months ago
- NanoGPT (124M) quality in 2.67B tokens ☆28 · Updated 2 months ago
- An implementation of Self-Extend, expanding the context window via grouped attention ☆119 · Updated last year
- Set of scripts to fine-tune LLMs ☆38 · Updated last year
- Entropix-style sampling + GUI ☆27 · Updated last year
- ☆67 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized xLSTMs ☆41 · Updated last year
- Demonstration that fine-tuning a RoPE model on longer sequences than it was pre-trained on extends the model's context limit ☆63 · Updated 2 years ago
- Train a SmolLM-style LLM on fineweb-edu in JAX/Flax with an assortment of optimizers ☆18 · Updated 4 months ago