☆176 · Updated 2 years ago (Feb 3, 2024)
Alternatives and similar repositories for profiling-cuda-in-torch
Users interested in profiling-cuda-in-torch are comparing it to the libraries listed below.
- GPU programming related news and material links ☆2,060 · Updated 2 weeks ago (Mar 8, 2026)
- Parallel Associative Scan for Language Models ☆18 · Updated 2 years ago (Jan 8, 2024)
- Solve puzzles. Learn CUDA. ☆62 · Updated 2 years ago (Dec 13, 2023)
- PyTorch implementation for PaLM: A Hybrid Parser and Language Model. ☆10 · Updated 6 years ago (Jan 7, 2020)
- CUDA and Triton implementations of Flash Attention with SoftmaxN. ☆73 · Updated last year (May 26, 2024)
- Efficient PScan implementation in PyTorch ☆17 · Updated 2 years ago (Jan 2, 2024)
- ☆19 · Updated 3 months ago (Dec 4, 2025)
- Advanced Formal Language Theory (263-5352-00L; Spring 2023) ☆10 · Updated 3 years ago (Feb 21, 2023)
- ☆307 · Updated last week (Mar 16, 2026)
- A place to store reusable transformer components of my own creation or found on the interwebs ☆75 · Updated this week
- ring-attention experiments ☆168 · Updated last year (Oct 17, 2024)
- Utilities for PyTorch distributed ☆25 · Updated last year (Feb 27, 2025)
- Fine-Tuning Pre-trained Transformers into Decaying Fast Weights ☆19 · Updated 3 years ago (Oct 9, 2022)
- Code for the paper "Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns" ☆18 · Updated 2 years ago (Mar 15, 2024)
- Triton Implementation of HyperAttention Algorithm ☆48 · Updated 2 years ago (Dec 11, 2023)
- ☆14 · Updated 3 years ago (Nov 20, 2022)
- Official Implementation of ACL2023: Don't Parse, Choose Spans! Continuous and Discontinuous Constituency Parsing via Autoregressive Span … ☆14 · Updated 2 years ago (Aug 25, 2023)
- Official Repository for Efficient Linear-Time Attention Transformers. ☆18 · Updated last year (Jun 2, 2024)
- FlexAttention w/ FlashAttention3 Support ☆27 · Updated last year (Oct 5, 2024)
- Recursive Bayesian Networks ☆11 · Updated 10 months ago (May 11, 2025)
- ☆18 · Updated 2 years ago (Apr 3, 2023)
- Embroid: Unsupervised Prediction Smoothing Can Improve Few-Shot Classification ☆11 · Updated 2 years ago (Aug 12, 2023)
- Material for gpu-mode lectures ☆5,865 · Updated last month (Feb 1, 2026)
- ☆20 · Updated last year (May 30, 2024)
- Puzzles for learning Triton ☆2,336 · Updated this week
- Source code for the NAACL 2022 main-conference paper "Dynamic Programming in Rank Space: Scaling Structured Inference with Low-Rank HMMs and PCFGs" ☆10 · Updated 3 years ago (Sep 26, 2022)
- ☆17 · Updated last year (Dec 19, 2024)
- Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024) ☆24 · Updated last year (Jun 6, 2024)
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆27 · Updated last year (Apr 17, 2024)
- ☆13 · Updated 3 years ago (Feb 7, 2023)
- Faster PyTorch bitsandbytes 4-bit fp4 nn.Linear ops ☆30 · Updated 2 years ago (Mar 16, 2024)
- Utilities for Training Very Large Models ☆58 · Updated last year (Sep 25, 2024)
- Experiment of using Tangent to autodiff Triton ☆82 · Updated 2 years ago (Jan 22, 2024)
- HGRN2: Gated Linear RNNs with State Expansion ☆56 · Updated last year (Aug 20, 2024)
- [ICML 2023] "Data Efficient Neural Scaling Law via Model Reusing" by Peihao Wang, Rameswar Panda, Zhangyang Wang ☆14 · Updated 2 years ago (Jan 4, 2024)
- Tile primitives for speedy kernels ☆3,244 · Updated this week
- Engineering the state of RNN language models (Mamba, RWKV, etc.) ☆32 · Updated last year (May 25, 2024)
- Flash Attention in ~100 lines of CUDA (forward pass only) ☆1,098 · Updated last year (Dec 30, 2024)
- APPy (Annotated Parallelism for Python) enables users to annotate loops and tensor expressions in Python with compiler directives akin to… ☆30 · Updated this week