pytorch / maskedtensor
MaskedTensors for PyTorch
☆38 · Updated 2 years ago
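For orientation, here is a minimal sketch of the masked-tensor idea, written against the `torch.masked` prototype API that this project fed into; the standalone `maskedtensor` package may expose a slightly different import path, so treat the exact names here as assumptions rather than this repo's documented API.

```python
import torch
from torch.masked import masked_tensor  # prototype API; the standalone repo used a top-level `maskedtensor` import

data = torch.tensor([1.0, 2.0, float("nan"), 4.0])
mask = torch.tensor([True, True, False, True])  # False marks elements that "don't exist"

mt = masked_tensor(data, mask)
# Reductions operate only on unmasked elements instead of propagating the NaN.
print(mt.sum())
```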
Alternatives and similar repositories for maskedtensor
Users interested in maskedtensor are comparing it to the libraries listed below.
- CUDA implementation of autoregressive linear attention, with all the latest research findings ☆44 · Updated 2 years ago
- ☆29 · Updated 2 years ago
- Implementation of some personal helper functions for Einops, my favorite tensor manipulation library ❤️ ☆54 · Updated 2 years ago
- Implementation of Gated State Spaces, from the paper "Long Range Language Modeling via Gated State Spaces", in Pytorch ☆100 · Updated 2 years ago
- GPU tester that detects broken and slow GPUs in a cluster ☆70 · Updated 2 years ago
- See details in https://github.com/pytorch/xla/blob/r1.12/torch_xla/distributed/fsdp/README.md ☆24 · Updated 2 years ago
- ☆33 · Updated 2 years ago
- Another attempt at a long-context / efficient transformer by me ☆38 · Updated 3 years ago
- [TMLR 2022] Curvature access through the generalized Gauss-Newton's low-rank structure: Eigenvalues, eigenvectors, directional derivative… ☆17 · Updated last year
- Implementation of LogAvgExp for Pytorch ☆36 · Updated 2 months ago
- Experiment in using Tangent to autodiff Triton ☆79 · Updated last year
- Implementation of Hourglass Transformer, in Pytorch, from Google and OpenAI ☆91 · Updated 3 years ago
- A small demonstration of using WebDataset with ImageNet and PyTorch Lightning ☆74 · Updated last year
- ☆104 · Updated last year
- ☆60 · Updated 3 years ago
- ☆53 · Updated 8 months ago
- Code for the paper PermuteFormer ☆42 · Updated 3 years ago
- HomebrewNLP in JAX flavour for maintainable TPU training ☆50 · Updated last year
- Parallel Associative Scan for Language Models ☆18 · Updated last year
- Automatically take good care of your preemptible TPUs ☆36 · Updated 2 years ago
- My explorations into editing the knowledge and memories of an attention network ☆35 · Updated 2 years ago
- Blog post ☆17 · Updated last year
- AdamW optimizer for bfloat16 models in PyTorch 🔥 ☆32 · Updated last year
- Implementation of an Attention layer where each head can attend to more than just one token, using coordinate descent to pick topk ☆46 · Updated last year
- A case study of efficient training of large language models using commodity hardware. ☆69 · Updated 2 years ago
- FID computation in Jax/Flax. ☆27 · Updated 11 months ago
- Very deep VAEs in JAX/Flax ☆46 · Updated 4 years ago
- ☆31 · Updated last week
- Implementation of "compositional attention" from MILA, a multi-head attention variant that is reframed as a two-step attention process wi… ☆51 · Updated 3 years ago
- Amos optimizer with JEstimator lib. ☆82 · Updated last year