kailums / flash-attention-rocm
Fast and memory-efficient exact attention, ported to ROCm
☆13 · Updated 2 years ago
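The description above names the core trick: exact attention computed tile-by-tile with an online softmax, so the full attention matrix never has to be materialized. Below is a minimal single-head PyTorch sketch of that idea; it illustrates the algorithm only, not this port's ROCm kernels or API, and the block size and shapes are arbitrary choices.

```python
import torch

def blockwise_attention(q, k, v, block=16):
    """Exact attention over key/value tiles via a numerically stable
    online softmax; only O(block) scores are live at any time."""
    scale = q.shape[-1] ** -0.5
    m = torch.full((q.shape[0],), float("-inf"))  # running row-wise max
    l = torch.zeros(q.shape[0])                   # running softmax denominator
    acc = torch.zeros_like(q)                     # running weighted sum of V
    for start in range(0, k.shape[0], block):
        kb, vb = k[start:start + block], v[start:start + block]
        s = (q @ kb.T) * scale                    # scores for this tile only
        m_new = torch.maximum(m, s.max(dim=-1).values)
        p = torch.exp(s - m_new[:, None])         # tile probabilities, rescaled
        fix = torch.exp(m - m_new)                # rescale earlier accumulators
        l = l * fix + p.sum(dim=-1)
        acc = acc * fix[:, None] + p @ vb
        m = m_new
    return acc / l[:, None]

# Agrees with naive attention up to float error:
q, k, v = (torch.randn(64, 32) for _ in range(3))
ref = torch.softmax((q @ k.T) * 32 ** -0.5, dim=-1) @ v
assert torch.allclose(blockwise_attention(q, k, v), ref, atol=1e-5)
```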
Alternatives and similar repositories for flash-attention-rocm
Users interested in flash-attention-rocm are comparing it to the libraries listed below.
- Implementation of IceFormer: Accelerated Inference with Long-Sequence Transformers on CPUs (ICLR 2024). ☆25 · Updated 6 months ago
- ☆92 · Updated last month
- A collection of reproducible inference engine benchmarks ☆38 · Updated 9 months ago
- Implementation of the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google, in PyTorch ☆58 · Updated last week
- ☆78 · Updated last year
- A new metric for evaluating the faithfulness of text generated by LLMs; the work behind this repository can be found here ☆31 · Updated 2 years ago
- Make Triton easier ☆50 · Updated last year
- ☆43 · Updated 4 months ago
- Code for the paper "Accessing higher dimensions for unsupervised word translation" ☆22 · Updated 2 years ago
- Implementation of the model: "Reka Core, Flash, and Edge: A Series of Powerful Multimodal Language Models" in PyTorch ☆28 · Updated this week
- Large-scale distributed model training strategy with Colossal AI and Lightning AI ☆56 · Updated 2 years ago
- ☆26 · Updated 2 years ago
- Tooling for exact and MinHash deduplication of large-scale text datasets (see the MinHash sketch after this list) ☆56 · Updated last week
- Repository for CPU Kernel Generation for LLM Inference ☆27 · Updated 2 years ago
- ☆39 · Updated last year
- Linear Attention Sequence Parallelism (LASP) ☆88 · Updated last year
- ☆45 · Updated 5 months ago
- ☆51 · Updated 3 months ago
- NeurIPS 2023 - Cappy: Outperforming and Boosting Large Multi-Task LMs with a Small Scorer ☆45 · Updated last year
- ☆17 · Updated last year
- Embroid: Unsupervised Prediction Smoothing Can Improve Few-Shot Classification ☆11 · Updated 2 years ago
- Evaluation of the BM42 sparse indexing algorithm ☆72 · Updated last year
- CUDA and Triton implementations of Flash Attention with SoftmaxN. ☆73 · Updated last year
- My Implementation of Q-Sparse: All Large Language Models can be Fully Sparsely-Activated ☆33 · Updated last year
- A memory-efficient DLRM training solution using ColossalAI ☆105 · Updated 3 years ago
- Multi-Layer Key-Value sharing experiments on Pythia models ☆34 · Updated last year
- GoldFinch and other hybrid transformer components ☆45 · Updated last year
- 🚀 Automatically convert unstructured data into a high-quality 'textbook' format, optimized for fine-tuning Large Language Models (LLMs) ☆25 · Updated 2 years ago
- Contextual Position Encoding but with some custom CUDA kernels https://arxiv.org/abs/2405.18719 ☆22 · Updated last year
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆40 · Updated last year
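On the MinHash deduplication tooling flagged earlier in the list: the technique reduces each document to a fixed-length signature whose slot-agreement rate estimates the Jaccard similarity of the underlying token sets, so near-duplicates can be found without exact pairwise set comparisons. The sketch below uses hypothetical helper names and a generic hash scheme, not that repository's actual API.

```python
import hashlib

NUM_PERM = 64  # number of seeded hash functions per signature

def _hash(token: str, seed: int) -> int:
    # Seed-prefixed blake2b stands in for a family of hash permutations.
    data = f"{seed}:{token}".encode()
    return int.from_bytes(hashlib.blake2b(data, digest_size=8).digest(), "big")

def minhash_signature(tokens):
    """One minimum hash per seed over the document's token set."""
    return [min(_hash(t, seed) for t in set(tokens)) for seed in range(NUM_PERM)]

def estimated_jaccard(sig_a, sig_b):
    """Fraction of matching signature slots approximates Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / NUM_PERM

a = "the quick brown fox jumps over the lazy dog".split()
b = "the quick brown fox leaps over the lazy dog".split()
print(estimated_jaccard(minhash_signature(a), minhash_signature(b)))
# roughly 7/9 ≈ 0.78, the true Jaccard similarity of the two token sets
```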