apple / ml-sigma-reparam
☆307 · Updated last year
Alternatives and similar repositories for ml-sigma-reparam
Users interested in ml-sigma-reparam are comparing it to the libraries listed below.
- For optimization algorithm research and development. ☆530 · Updated this week
- JAX implementation of the Llama 2 model ☆219 · Updated last year
- Annotated version of the Mamba paper ☆487 · Updated last year
- Minimal (400 LOC) implementation, maximum (multi-node, FSDP) GPT training ☆130 · Updated last year
- WIP ☆94 · Updated last year
- Efficient optimizers ☆254 · Updated 2 weeks ago
- A repository for log-time feedforward networks ☆223 · Updated last year
- Scalable and Performant Data Loading ☆291 · Updated last week
- ☆87 · Updated last year
- The AdEMAMix Optimizer: Better, Faster, Older. ☆184 · Updated 11 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆152 · Updated last month
- Supporting PyTorch FSDP for optimizers ☆84 · Updated 8 months ago
- Fast bare-bones BPE for modern tokenizer training ☆164 · Updated 2 months ago
- ☆275 · Updated last year
- Understand and test language model architectures on synthetic tasks. ☆221 · Updated last month
- Implementation of the Llama architecture with RLHF + Q-learning ☆166 · Updated 6 months ago
- Experiments around a simple idea for inducing multiple hierarchical predictive models within a GPT ☆220 · Updated last year
- git extension for {collaborative, communal, continual} model development ☆217 · Updated 9 months ago
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆240 · Updated 2 months ago
- Minimal sharded dataset loaders, decoders, and utils for multi-modal document, image, and text datasets. ☆158 · Updated last year
- Just some miscellaneous utility functions / decorators / modules related to PyTorch and Accelerate to help speed up implementation of new… ☆124 · Updated last year
- Large scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* ☆87 · Updated last year
- Implementation of the Recurrent Memory Transformer (NeurIPS 2022 paper) in PyTorch ☆413 · Updated 7 months ago
- Implementation of the conditionally routed attention from the CoLT5 architecture, in PyTorch ☆229 · Updated 11 months ago
- A JAX-based library for building transformers, including implementations of GPT, Gemma, Llama, Mixtral, Whisper, Swin, ViT, and more ☆290 · Updated 11 months ago
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) ☆191 · Updated last year
- Effortless plug-and-play optimizer to cut model training costs by 50%. A new optimizer that is 2x faster than Adam on LLMs. ☆379 · Updated last year
- Getting crystal-like representations with harmonic loss ☆194 · Updated 4 months ago
- Language Modeling with the H3 State Space Model ☆519 · Updated last year
- ☆166 · Updated 2 years ago