nisten / grokadamw
new optimizer
☆20 · Updated last year
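A minimal sketch of how such an optimizer is typically dropped into a PyTorch training loop, assuming grokadamw follows the standard `torch.optim.Optimizer` interface; the import path and constructor arguments are illustrative assumptions, not taken from the repo:

```python
# Sketch: plugging a drop-in PyTorch optimizer into a training loop.
# The import path and constructor signature below are assumptions for
# illustration; check the repo itself for the actual API.
import torch
import torch.nn.functional as F
from grokadamw import GrokAdamW  # assumed import path

model = torch.nn.Linear(10, 2)
optimizer = GrokAdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)

for step in range(100):
    x = torch.randn(32, 10)                 # dummy batch of features
    y = torch.randint(0, 2, (32,))          # dummy class labels
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()
```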
Alternatives and similar repositories for grokadamw
Users interested in grokadamw are comparing it to the libraries listed below.
- ☆134 · Updated last year
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆55 · Updated 6 months ago
- Latent Large Language Models ☆18 · Updated last year
- Collection of autoregressive model implementations ☆86 · Updated 4 months ago
- ☆54 · Updated 9 months ago
- ☆49 · Updated last year
- Repository containing the SPIN experiments on the DIBT 10k ranked prompts ☆24 · Updated last year
- Demonstration that finetuning a RoPE model on sequences longer than those seen in pre-training extends the model's context limit ☆63 · Updated 2 years ago
- ☆38 · Updated last year
- Experiments toward training a new and improved T5 ☆76 · Updated last year
- Lightweight toolkit package to train and fine-tune 1.58-bit language models ☆85 · Updated 3 months ago
- RWKV-7: Surpassing GPT ☆94 · Updated 9 months ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆31 · Updated last year
- An introduction to LLM Sampling ☆79 · Updated 8 months ago
- GoldFinch and other hybrid transformer components ☆45 · Updated last year
- A repository for research on medium-sized language models. ☆78 · Updated last year
- Repo hosting code and materials related to speeding up LLM inference using token merging. ☆36 · Updated last month
- ☆39 · Updated last year
- Official repo for Learning to Reason for Long-Form Story Generation ☆68 · Updated 4 months ago
- Simple GRPO scripts and configurations. ☆59 · Updated 6 months ago
- NanoGPT-speedrunning for the poor T4 enjoyers ☆70 · Updated 4 months ago
- EvaByte: Efficient Byte-level Language Models at Scale ☆109 · Updated 4 months ago
- Modeling code for a BitNet b1.58 Llama-style model. ☆25 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized xLSTMs. ☆41 · Updated last year
- Set of scripts to finetune LLMs ☆37 · Updated last year
- ☆61 · Updated last year
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆146 · Updated 6 months ago
- ☆67 · Updated last year
- Scaling is a distributed training library and installable dependency designed to scale up neural networks, with a dedicated module for tr… ☆64 · Updated 10 months ago
- Storing long contexts in tiny caches with self-study ☆145 · Updated last week