muellerzr / import-timer
Pragmatic approach to parsing import profiles for CIs
☆11 · Updated last year
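The premise behind the tool: CPython can emit a per-module import-time report via its built-in `-X importtime` flag, and parsing that report in CI catches import-time regressions before they ship. As a rough sketch of the idea only (this is not import-timer's actual API; the regex and the two-spaces-per-level indent convention are assumptions about the report format):

```python
import re
import subprocess
import sys

# CPython's -X importtime flag prints a per-module timing report to stderr:
#   import time: self [us] | cumulative | imported package
result = subprocess.run(
    [sys.executable, "-X", "importtime", "-c", "import json"],
    capture_output=True,
    text=True,
)

# Assumed report shape: two pipe-separated microsecond columns, then the
# module name indented (roughly two spaces per nesting level).
LINE = re.compile(r"import time:\s+(\d+) \|\s+(\d+) \| (\s*)(\S+)")

rows = []
for line in result.stderr.splitlines():
    m = LINE.match(line)
    if m:  # the header line has no digit columns, so it is skipped here
        self_us, cum_us, indent, module = m.groups()
        rows.append((module, int(self_us), int(cum_us), len(indent) // 2))

# Surface the slowest imports by cumulative time.
for module, _self_us, cum_us, depth in sorted(rows, key=lambda r: -r[2])[:5]:
    print(f"{cum_us:>8} us  {'  ' * depth}{module}")
```

A CI job built on this pattern could fail the build whenever a module's cumulative import time crosses a chosen threshold.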
Alternatives and similar repositories for import-timer
Users interested in import-timer are comparing it to the libraries listed below:
- Fast, Modern, and Low Precision PyTorch Optimizers ☆108 · Updated 3 weeks ago
- ☆87 · Updated last year
- Load compute kernels from the Hub ☆244 · Updated this week
- Experiment of using Tangent to autodiff triton ☆80 · Updated last year
- Various transformers for FSDP research ☆38 · Updated 2 years ago
- Implementation of Flash Attention in Jax ☆216 · Updated last year
- Hugging Face Jobs ☆19 · Updated last month
- ☆19 · Updated 2 years ago
- An implementation of the Llama architecture, to instruct and delight ☆21 · Updated 2 months ago
- ☆20 · Updated 2 years ago
- ☆118 · Updated last year
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8. ☆46 · Updated last year
- Serialize JAX, Flax, Haiku, or Objax model params with 🤗`safetensors` ☆45 · Updated last year
- 👷 Build compute kernels ☆106 · Updated last week
- Minimal (400 LOC) implementation of maximum (multi-node, FSDP) GPT training ☆131 · Updated last year
- A library for unit scaling in PyTorch ☆129 · Updated last month
- JAX implementation of the Llama 2 model ☆219 · Updated last year
- ☆67 · Updated 3 years ago
- Google TPU optimizations for transformers models ☆118 · Updated 7 months ago
- Supporting PyTorch FSDP for optimizers ☆84 · Updated 8 months ago
- A place to store reusable transformer components of my own creation or found on the interwebs ☆60 · Updated 2 weeks ago
- HomebrewNLP in JAX flavour for maintainable TPU training ☆50 · Updated last year
- Large-scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* ☆87 · Updated last year
- JAX Synergistic Memory Inspector ☆178 · Updated last year
- Scalable and Performant Data Loading ☆291 · Updated this week
- ☆88 · Updated last year
- PyTorch-centric eager-mode debugger ☆47 · Updated 8 months ago
- ☆43 · Updated last week
- ☆61 · Updated 3 years ago
- Minimal example scripts of the Hugging Face Trainer, focused on staying under 150 lines ☆197 · Updated last year