hyhieu / easy_pybind
☆32 · Updated last year
Alternatives and similar repositories for easy_pybind
Users interested in easy_pybind are comparing it to the libraries listed below.
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆131 · Updated last year
- Fast and memory-efficient PyTorch implementation of the Perceiver with FlashAttention. ☆31 · Updated last year
- Experimental GPU language with meta-programming ☆24 · Updated last year
- ☆91 · Updated last year
- ☆92 · Updated last year
- Flash Attention Triton kernel with support for second-order derivatives ☆129 · Updated 2 weeks ago
- ☆263 · Updated 7 months ago
- ☆53 · Updated last year
- ☆27 · Updated 3 months ago
- Learning about CUDA by writing PTX code. ☆151 · Updated last year
- Supporting code for the blog post on modular manifolds. ☆109 · Updated 3 months ago
- Personal solutions to the Triton Puzzles ☆20 · Updated last year
- ☆23 · Updated 8 months ago
- train with kittens! ☆63 · Updated last year
- Experiment of using Tangent to autodiff triton ☆81 · Updated last year
- FlashRNN - Fast RNN Kernels with I/O Awareness ☆174 · Updated 2 months ago
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8. ☆46 · Updated last year
- H-Net Dynamic Hierarchical Architecture ☆80 · Updated 3 months ago
- Minimal (400 LOC) implementation of maximum (multi-node, FSDP) GPT training ☆132 · Updated last year
- FlexAttention w/ FlashAttention3 Support ☆27 · Updated last year
- Implementation of the proposed Spline-Based Transformer from Disney Research ☆105 · Updated last year
- ☆178 · Updated last year
- ☆43 · Updated 2 months ago
- Explorations into the recently proposed Taylor Series Linear Attention ☆100 · Updated last year
- My attempt to improve the speed of the Newton-Schulz algorithm, starting from the Dion implementation. ☆25 · Updated last month
- JAX bindings for Flash Attention v2 ☆102 · Updated last week
- Quantized LLM training in pure CUDA/C++. ☆230 · Updated this week
- σ-GPT: A New Approach to Autoregressive Models ☆70 · Updated last year
- The evaluation framework for training-free sparse attention in LLMs ☆108 · Updated 2 months ago
- Focused on fast experimentation and simplicity ☆79 · Updated last year