divyamakkar0 / JAXformer
A zero-to-one guide on scaling modern transformers with n-dimensional parallelism.
☆104 · Updated last month
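JAXformer's subject, n-dimensional parallelism, is typically expressed in JAX by laying devices out in a named mesh and annotating arrays with partition specs. As a rough orientation for the list below, here is a minimal sketch of that idea; it is not taken from the repo, and the mesh shape, axis names, and array sizes are assumptions:

```python
import jax
import jax.numpy as jnp
from jax.experimental import mesh_utils
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

# Lay the available devices out as a 2D mesh: one axis for data
# parallelism, one for model (tensor) parallelism. The (4, 2) shape is an
# assumption and must match the actual device count (here, 8 devices).
devices = mesh_utils.create_device_mesh((4, 2))
mesh = Mesh(devices, axis_names=("data", "model"))

# Shard the batch along the "data" axis and the weight matrix along the
# "model" axis; jit then compiles the matmul into a sharded computation,
# inserting whatever collectives the shardings imply.
x = jax.device_put(jnp.ones((32, 512)), NamedSharding(mesh, P("data", None)))
w = jax.device_put(jnp.ones((512, 2048)), NamedSharding(mesh, P(None, "model")))
y = jax.jit(jnp.dot)(x, w)
print(y.sharding)
```

The same pattern generalizes to more mesh axes (pipeline, expert, sequence), which is what "n-dimensional" refers to.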
Alternatives and similar repositories for JAXformer
Users interested in JAXformer are comparing it to the repositories listed below:
- seqax = sequence modeling + JAX ☆168 · Updated 3 months ago
- ☆285 · Updated last year
- NanoGPT-speedrunning for the poor T4 enjoyers ☆72 · Updated 6 months ago
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆195 · Updated 5 months ago
- ☆106 · Updated 3 weeks ago
- Simple Transformer in JAX ☆139 · Updated last year
- Training-ready RL environments + evals ☆164 · Updated this week
- Minimal yet performant LLM examples in pure JAX ☆198 · Updated last month
- SIMD quantization kernels ☆92 · Updated 2 months ago
- FlexAttention-based, minimal vLLM-style inference engine for fast Gemma 2 inference ☆302 · Updated last week
- Dion optimizer algorithm ☆383 · Updated this week
- ☆225 · Updated 3 weeks ago
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆107 · Updated 8 months ago
- Accelerate and optimize performance with streamlined training and serving options in JAX ☆321 · Updated this week
- Compiling useful links, papers, benchmarks, ideas, etc. ☆45 · Updated 7 months ago
- ☆143 · Updated 2 months ago
- RL from zero pretrain: can it be done? Yes. ☆280 · Updated last month
- ☆121 · Updated last week
- The simplest, fastest repository for training/finetuning medium-sized GPTs ☆171 · Updated 4 months ago
- ☆91 · Updated last year
- Simple & scalable pretraining for neural architecture research ☆298 · Updated 2 weeks ago
- Quantized LLM training in pure CUDA/C++ ☆215 · Updated this week
- 🧱 Modula software package ☆300 · Updated 2 months ago
- A puzzle to learn about prompting ☆135 · Updated 2 years ago
- ☆28 · Updated last year
- ☆232 · Updated 4 months ago
- Cost-aware hyperparameter tuning algorithm ☆173 · Updated last year
- Solve puzzles. Learn CUDA. ☆64 · Updated last year
- ☆525 · Updated 3 months ago
- Large-scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still a work in progress)* ☆87 · Updated last year