muellerzr / import-timer
Pragmatic approach to parsing import profiles for CIs
☆11 · Updated 6 months ago
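import-timer ships its own parser, but as a rough sketch of the kind of data it works with: CPython's built-in `-X importtime` flag prints per-module import timings to stderr, and a CI check can parse those lines and flag slow imports. The module name and threshold below are illustrative assumptions, not import-timer's actual API or defaults.

```python
# Minimal sketch: run `python -X importtime` on a module and parse the
# stderr profile CPython emits. Not import-timer's implementation.
import subprocess
import sys

def profile_imports(module: str) -> list[tuple[str, int, int]]:
    """Return (module_name, self_us, cumulative_us) tuples for one import."""
    result = subprocess.run(
        [sys.executable, "-X", "importtime", "-c", f"import {module}"],
        capture_output=True,
        text=True,
    )
    rows = []
    for line in result.stderr.splitlines():
        # Profile lines look like: "import time:       136 |        136 |   _io"
        if not line.startswith("import time:"):
            continue
        body = line[len("import time:"):]
        self_us, cumulative_us, name = (part.strip() for part in body.split("|"))
        if self_us == "self [us]":  # skip the header row
            continue
        # Leading spaces on the name encode import depth; strip() drops them.
        rows.append((name, int(self_us), int(cumulative_us)))
    return rows

if __name__ == "__main__":
    # Flag anything over 100 ms cumulative; the threshold is an
    # arbitrary example for a CI gate, not a tool default.
    for name, self_us, cumulative_us in profile_imports("json"):
        if cumulative_us > 100_000:
            print(f"slow import: {name} ({cumulative_us / 1000:.1f} ms)")
```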
Alternatives and similar repositories for import-timer:
Users interested in import-timer are comparing it to the libraries listed below.
- Learn CUDA with PyTorch ☆14 · Updated 2 months ago
- ☆20 · Updated last year
- Various transformers for FSDP research ☆34 · Updated 2 years ago
- Utilities for PyTorch distributed ☆23 · Updated last year
- Fast, Modern, Memory Efficient, and Low Precision PyTorch Optimizers ☆77 · Updated 6 months ago
- ML/DL Math and Method notes ☆57 · Updated last year
- An implementation of the Llama architecture, to instruct and delight ☆21 · Updated this week
- Experiment of using Tangent to autodiff Triton ☆74 · Updated 11 months ago
- JAX Implementation of Black Forest Labs' Flux.1 family of models ☆26 · Updated 2 months ago
- ☆75 · Updated 6 months ago
- ☆21 · Updated 2 months ago
- PyTorch-centric eager mode debugger ☆43 · Updated last month
- Serialize JAX, Flax, Haiku, or Objax model params with 🤗 `safetensors` ☆42 · Updated 7 months ago
- A library for squeakily cleaning and filtering language datasets. ☆45 · Updated last year
- HomebrewNLP in JAX flavour for maintainable TPU training ☆47 · Updated 11 months ago
- An unofficial Python client library for Lambda Labs' Cloud Computing Platform ☆13 · Updated last year
- ☆20 · Updated 2 years ago
- A sample pattern for running CI tests on Modal ☆14 · Updated 3 months ago
- Automatically take good care of your preemptible TPUs ☆34 · Updated last year
- Train a SmolLM-style LLM on fineweb-edu in JAX/Flax with an assortment of optimizers. ☆14 · Updated 2 weeks ago
- Code for the examples presented in the talk "Training a Llama in your backyard: fine-tuning very large models on consumer hardware" given… ☆14 · Updated last year
- Documentation Sprint for the fastai deep learning library ☆15 · Updated 2 years ago
- A miniature AI training framework for PyTorch ☆37 · Updated 3 weeks ago
- A place to store reusable transformer components of my own creation or found on the interwebs ☆44 · Updated 3 weeks ago
- Tools to make language models a bit easier to use ☆32 · Updated last month
- ☆17 · Updated last year
- ☆16 · Updated 2 years ago
- Large scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* ☆81 · Updated last year
- ☆83 · Updated 7 months ago
- Transformer with Mu-Parameterization, implemented in Jax/Flax. Supports FSDP on TPU pods. ☆30 · Updated last month