kailums / flash-attention-rocm
Fast and memory-efficient exact attention, ported to ROCm
☆11 · Updated last year
Alternatives and similar repositories for flash-attention-rocm
Users interested in flash-attention-rocm are comparing it to the libraries listed below.
- Implementation of IceFormer: Accelerated Inference with Long-Sequence Transformers on CPUs (ICLR 2024). ☆25 · Updated 2 months ago
- ☆11 · Updated 4 months ago
- A collection of reproducible inference engine benchmarks ☆33 · Updated 5 months ago
- Linear Attention Sequence Parallelism (LASP) ☆87 · Updated last year
- ☆26 · Updated 2 years ago
- A memory-efficient DLRM training solution using ColossalAI ☆106 · Updated 2 years ago
- A new metric for evaluating the faithfulness of text generated by LLMs. The work behind this repository can be found he… ☆31 · Updated 2 years ago
- ☆78 · Updated 10 months ago
- CUDA and Triton implementations of Flash Attention with SoftmaxN. ☆73 · Updated last year
- Make Triton easier ☆47 · Updated last year
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts ☆40 · Updated last year
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆39 · Updated 10 months ago
- ☆25 · Updated last month
- A library for simplifying fine-tuning with multi-GPU setups in the Hugging Face ecosystem ☆16 · Updated 11 months ago
- Minimal scripts for 24 GB VRAM GPUs: training, inference, whatever ☆42 · Updated 2 weeks ago
- ☆17 · Updated last year
- Implementation of the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google, in PyTorch ☆56 · Updated 2 weeks ago
- Zeus LLM Trainer is a rewrite of Stanford Alpaca, aiming to be the trainer for all large language models ☆69 · Updated 2 years ago
- ☆57 · Updated last year
- Implementation of the model "Reka Core, Flash, and Edge: A Series of Powerful Multimodal Language Models" in PyTorch ☆28 · Updated this week
- ☆39 · Updated last year
- Fast LLM training codebase with dynamic strategy selection [DeepSpeed + Megatron + FlashAttention + CUDA fusion kernels + compiler] ☆41 · Updated last year
- Benchmarking some transformer deployments ☆26 · Updated 2 years ago
- Evaluation of the BM42 sparse indexing algorithm ☆68 · Updated last year
- Set up the environment for vLLM users ☆16 · Updated last year
- ☆21 · Updated last year
- Contextual Position Encoding, but with some custom CUDA kernels (https://arxiv.org/abs/2405.18719) ☆22 · Updated last year
- Using FlexAttention to compute attention with different masking patterns ☆45 · Updated last year
- Open deep learning compiler stack for CPU, GPU, and specialized accelerators ☆19 · Updated 2 weeks ago
- PyTorch implementation of the paper "MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training" ☆24 · Updated 2 weeks ago