amjadmajid / BabyTorch
BabyTorch is a minimalist deep-learning framework with an API similar to PyTorch's. Its minimal design encourages learners to explore and understand the underlying algorithms and mechanics of deep learning. It is designed such that when learners are ready to switch to PyTorch, they only need to remove the word `baby`.
☆26 · Updated 6 months ago
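For illustration, a minimal sketch of that switch, assuming BabyTorch mirrors PyTorch's `tensor` and `nn` interfaces as described above (the specific names below are assumptions, not verified against the repo):

```python
# Hypothetical sketch: assumes BabyTorch mirrors PyTorch's core API,
# so migrating amounts to dropping the word "baby" from the imports.
import babytorch as torch      # after migrating: import torch
import babytorch.nn as nn      # after migrating: import torch.nn as nn

model = nn.Linear(3, 1)               # same layer API as torch.nn.Linear
x = torch.tensor([[1.0, 2.0, 3.0]])   # same constructor as torch.tensor
loss = model(x).sum()
loss.backward()                       # autograd populates .grad, as in PyTorch
print(model.weight.grad)
```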
Alternatives and similar repositories for BabyTorch
Users interested in BabyTorch are comparing it to the libraries listed below.
- Implementation of Diffusion Transformer (DiT) in JAX ☆298 · Updated last year
- Minimal yet performant LLM examples in pure JAX ☆214 · Updated 2 weeks ago
- ☆285 · Updated last year
- Synchronized Curriculum Learning for RL Agents ☆116 · Updated last month
- ☆122 · Updated 6 months ago
- ☆128 · Updated last week
- Minimal but scalable implementation of large language models in JAX ☆35 · Updated 3 weeks ago
- Cost-aware hyperparameter tuning algorithm ☆176 · Updated last year
- PyTorch implementation of Evolutionary Policy Optimization, from Wang et al. of the Robotics Institute at Carnegie Mellon University ☆102 · Updated 2 months ago
- A simple library for scaling up JAX programs ☆144 · Updated last month
- Latent Program Network (from the "Searching Latent Program Spaces" paper) ☆106 · Updated 3 weeks ago
- Solve puzzles. Learn CUDA. ☆64 · Updated 2 years ago
- CIFAR-10 speedruns: 94% in 2.6 seconds and 96% in 27 seconds ☆334 · Updated last month
- Minimal, lightweight JAX implementations of popular models. ☆171 · Updated last week
- Maximal Update Parametrization (μP) with Flax & Optax. ☆16 · Updated last year
- 🧱 Modula software package ☆316 · Updated 4 months ago
- torchax is a PyTorch frontend for JAX. It lets users author JAX programs using familiar PyTorch syntax. It also provides JA… ☆148 · Updated 2 weeks ago
- LoRA for arbitrary JAX models and functions ☆143 · Updated last year
- Accelerated minigrid environments with JAX ☆153 · Updated 2 months ago
- Efficient optimizers ☆277 · Updated last month
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆37 · Updated 2 years ago
- Accelerate and optimize performance with streamlined training and serving options in JAX. ☆325 · Updated this week
- Flax (JAX) implementation of DeepSeek-R1-Distill-Qwen-1.5B with weights ported from Hugging Face. ☆26 · Updated 10 months ago
- ☆210 · Updated last year
- Just some miscellaneous utility functions / decorators / modules related to PyTorch and Accelerate to help speed up implementation of new… ☆125 · Updated last year
- A projection-based framework for gradient-free and parallel learning ☆108 · Updated 6 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆179 · Updated 5 months ago
- Custom Triton kernels for training Karpathy's nanoGPT. ☆19 · Updated last year
- ☆69 · Updated 2 years ago
- seqax = sequence modeling + JAX ☆169 · Updated 4 months ago