modula-systems / modula
🧱 Modula software package
⭐277 Updated last month
Alternatives and similar repositories for modula
Users who are interested in modula are comparing it to the libraries listed below.
- ⭐281 Updated last year
- Efficient optimizers ⭐265 Updated last week
- Minimal yet performant LLM examples in pure JAX ⭐177 Updated last week
- ⭐215 Updated 10 months ago
- Supporting PyTorch FSDP for optimizers ⭐84 Updated 9 months ago
- seqax = sequence modeling + JAX ⭐167 Updated 2 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ⭐164 Updated 3 months ago
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and JAX ⭐667 Updated this week
- A simple library for scaling up JAX programs ⭐143 Updated 11 months ago
- Accelerated First Order Parallel Associative Scan ⭐188 Updated last year
- PyTorch implementation of preconditioned stochastic gradient descent (Kron and affine preconditioner, low-rank approximation precondition… ⭐183 Updated this week
- Named Tensors for Legible Deep Learning in JAX ⭐205 Updated last week
- LoRA for arbitrary JAX models and functions ⭐142 Updated last year
- MLCommons Algorithmic Efficiency is a benchmark and competition measuring neural network training speedups due to algorithmic improvement… ⭐398 Updated last week
- CIFAR-10 speedruns: 94% in 2.6 seconds and 96% in 27 seconds ⭐301 Updated 2 months ago
- A library for unit scaling in PyTorch ⭐130 Updated 2 months ago
- Cost-aware hyperparameter tuning algorithm ⭐168 Updated last year
- A MAD laboratory to improve AI architecture designs 🧪 ⭐129 Updated 9 months ago
- Universal Notation for Tensor Operations in Python. ⭐434 Updated 5 months ago
- Dion optimizer algorithm ⭐360 Updated this week
- JAX Synergistic Memory Inspector ⭐180 Updated last year
- An implementation of the PSGD Kron second-order optimizer for PyTorch ⭐97 Updated 2 months ago
- ⭐120 Updated 3 months ago
- nanoGPT-like codebase for LLM training ⭐107 Updated 4 months ago
- Implementation of Diffusion Transformer (DiT) in JAX ⭐291 Updated last year
- Minimal but scalable implementation of large language models in JAX ⭐35 Updated last month
- For optimization algorithm research and development. ⭐539 Updated this week
- Accelerate and optimize performance with streamlined training and serving options in JAX. ⭐311 Updated this week
- ⭐233 Updated 7 months ago
- Understand and test language model architectures on synthetic tasks. ⭐226 Updated last week