kailums / flash-attention-rocm
Fast and memory-efficient exact attention ported to ROCm
☆11 · Updated last year
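For reference, the upstream flash-attn package exposes a functional attention API like the sketch below. This is a minimal illustration of that upstream interface; whether this ROCm port keeps the same `flash_attn_func` signature is an assumption, not something the fork confirms.

```python
# Minimal sketch of the upstream flash-attn functional API.
# Assumption: the ROCm port mirrors this interface; check the fork's README.
import torch
from flash_attn import flash_attn_func

batch, seqlen, nheads, headdim = 2, 1024, 8, 64
# flash-attn expects (batch, seqlen, nheads, headdim) tensors in fp16/bf16 on GPU.
q = torch.randn(batch, seqlen, nheads, headdim, device="cuda", dtype=torch.float16)
k = torch.randn_like(q)
v = torch.randn_like(q)

# Exact (not approximate) attention, computed tile by tile so the full
# seqlen x seqlen score matrix is never materialized in memory.
out = flash_attn_func(q, k, v, causal=True)  # -> (batch, seqlen, nheads, headdim)
```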
Alternatives and similar repositories for flash-attention-rocm
Users interested in flash-attention-rocm are comparing it to the libraries listed below.
- Implementation of IceFormer: Accelerated Inference with Long-Sequence Transformers on CPUs (ICLR 2024). ☆25 · Updated 4 months ago
- This is a new metric that can be used to evaluate faithfulness of text generated by LLMs. The work behind this repository can be found here… ☆31 · Updated 2 years ago
- ☆78 · Updated last year
- ☆12 · Updated 6 months ago
- Embroid: Unsupervised Prediction Smoothing Can Improve Few-Shot Classification ☆11 · Updated 2 years ago
- A collection of reproducible inference engine benchmarks ☆37 · Updated 7 months ago
- Linear Attention Sequence Parallelism (LASP) ☆87 · Updated last year
- ☆36 · Updated 3 months ago
- code for paper "Accessing higher dimensions for unsupervised word translation" ☆22 · Updated 2 years ago
- PyTorch Implementation of the paper "MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training" ☆24 · Updated last week
- ☆26 · Updated 2 years ago
- OLMost every training recipe you need to perform data interventions with the OLMo family of models. ☆56 · Updated this week
- Contextual Position Encoding but with some custom CUDA Kernels https://arxiv.org/abs/2405.18719 ☆22 · Updated last year
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆40 · Updated last year
- 🚀 Automatically convert unstructured data into a high-quality 'textbook' format, optimized for fine-tuning Large Language Models (LLMs) ☆25 · Updated 2 years ago
- A memory-efficient DLRM training solution using ColossalAI ☆106 · Updated 3 years ago
- Data preparation code for Amber 7B LLM ☆93 · Updated last year
- Advanced Ultra-Low Bitrate Compression Techniques for the LLaMA Family of LLMs ☆110 · Updated last year
- Implementation of the paper: "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in PyTorch ☆57 · Updated this week
- Large Scale Distributed Model Training strategy with Colossal AI and Lightning AI ☆56 · Updated 2 years ago
- ☆31 · Updated last year
- Open deep learning compiler stack for CPU, GPU and specialized accelerators ☆19 · Updated this week
- Implementation of the model: "Reka Core, Flash, and Edge: A Series of Powerful Multimodal Language Models" in PyTorch ☆29 · Updated last week
- Fast LLM training codebase with dynamic strategy choosing [DeepSpeed+Megatron+FlashAttention+CudaFusionKernel+Compiler] ☆41 · Updated last year
- NeurIPS 2023 - Cappy: Outperforming and Boosting Large Multi-Task LMs with a Small Scorer ☆44 · Updated last year
- ☆71 · Updated 8 months ago
- CUDA and Triton implementations of Flash Attention with SoftmaxN (see the plain-PyTorch sketch after this list). ☆73 · Updated last year
- Intel Gaudi's Megatron DeepSpeed Large Language Models for training ☆15 · Updated 11 months ago
- Train, tune, and infer Bamba model ☆136 · Updated 5 months ago
- The open source implementation of "Connecting Large Language Models with Evolutionary Algorithms Yields Powerful Prompt Optimizers" ☆19 · Updated last year
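On the SoftmaxN entry above: SoftmaxN adds a constant n to the softmax denominator, so an attention head can assign less than a full unit of probability mass (near-zero total weight) to its inputs. Below is a minimal plain-PyTorch reference sketch of that idea; the repository itself ships fused CUDA/Triton kernels, and the function name here is illustrative, not its actual API.

```python
import torch

def softmax_n(x: torch.Tensor, n: float = 1.0, dim: int = -1) -> torch.Tensor:
    """softmax_n(x)_i = exp(x_i) / (n + sum_j exp(x_j)).

    With n=0 this is ordinary softmax; with n=1 it is the "off-by-one"
    variant, whose outputs sum to less than 1.
    """
    # Shift by the (non-negative) max for numerical stability; the constant n
    # must be scaled by the same factor so the result is unchanged.
    m = x.max(dim=dim, keepdim=True).values.clamp(min=0.0)
    e = torch.exp(x - m)
    return e / (n * torch.exp(-m) + e.sum(dim=dim, keepdim=True))

# Quick check: with n=1, each row of weights sums to at most 1.
scores = torch.randn(2, 4, 8, 8)  # e.g. (batch, heads, q_len, k_len)
w = softmax_n(scores, n=1.0)
assert (w.sum(dim=-1) <= 1.0 + 1e-6).all()
```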