nisten / grokadamw
New optimizer
☆20 · Updated last year
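The description above is terse, so here is a minimal usage sketch. It assumes GrokAdamW is a drop-in replacement following the standard torch.optim.Optimizer interface; the import path and constructor arguments are assumptions, not taken from the repository, and torch.optim.AdamW is used as a stand-in so the snippet runs as-is.

```python
import torch
import torch.nn as nn

# from grokadamw import GrokAdamW  # hypothetical import path (assumption)

model = nn.Linear(128, 10)
criterion = nn.CrossEntropyLoss()

# Stand-in: AdamW is constructed here; a GrokAdamW-style optimizer would be
# swapped into this same slot if it follows the torch.optim interface.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=0.01)

for step in range(100):
    inputs = torch.randn(32, 128)          # dummy batch of features
    targets = torch.randint(0, 10, (32,))  # dummy class labels
    optimizer.zero_grad()
    loss = criterion(model(inputs), targets)
    loss.backward()
    optimizer.step()                       # apply the optimizer update
```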
Alternatives and similar repositories for grokadamw
Users interested in grokadamw are comparing it to the libraries listed below.
- Lightweight toolkit to train and fine-tune 1.58-bit language models ☆103 · Updated 7 months ago
- Latent Large Language Models ☆19 · Updated last year
- ☆55 · Updated last year
- ☆50 · Updated last year
- RWKV-7: Surpassing GPT ☆101 · Updated last year
- Experiments toward training a new and improved T5 ☆76 · Updated last year
- Repo hosting code and materials on speeding up LLM inference via token merging ☆37 · Updated 2 months ago
- ☆59 · Updated last month
- Collection of autoregressive model implementations ☆85 · Updated 7 months ago
- Aana SDK is a powerful framework for building AI-enabled multimodal applications ☆54 · Updated 3 months ago
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆31 · Updated last year
- Optimizing causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆59 · Updated 2 months ago
- Demonstration that fine-tuning a RoPE model on sequences longer than its pre-training length extends the model's context limit ☆63 · Updated 2 years ago
- A repository for research on medium-sized language models ☆77 · Updated last year
- ☆136 · Updated last year
- ☆39 · Updated last year
- EvaByte: Efficient Byte-level Language Models at Scale ☆111 · Updated 7 months ago
- Implementation of the Mamba SSM with hf_integration ☆56 · Updated last year
- GoldFinch and other hybrid transformer components ☆45 · Updated last year
- A byte-level decoder architecture that matches the performance of tokenized Transformers ☆66 · Updated last year
- ☆32 · Updated last year
- ☆62 · Updated 5 months ago
- Repository containing the SPIN experiments on the DIBT 10k ranked prompts ☆24 · Updated last year
- An implementation of Self-Extend, expanding the context window via grouped attention ☆119 · Updated last year
- Implementation of the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google, in PyTorch ☆58 · Updated 2 weeks ago
- https://x.com/BlinkDL_AI/status/1884768989743882276 ☆28 · Updated 7 months ago
- Train a SmolLM-style LLM on fineweb-edu in JAX/Flax with an assortment of optimizers ☆18 · Updated 4 months ago
- [WIP] Transformer to embed Danbooru labelsets ☆13 · Updated last year
- Scaling is a distributed training library and installable dependency designed to scale up neural networks, with a dedicated module for tr… ☆66 · Updated last month
- Code for the examples presented in the talk "Training a Llama in your backyard: fine-tuning very large models on consumer hardware" given… ☆14 · Updated 2 years ago