Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in Pytorch
★ 548 · May 16, 2025 · Updated 11 months ago
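For orientation before the comparison list: ring attention shards a long sequence across devices and rotates key/value blocks around a ring, so each device attends to the full sequence while only ever holding one block at a time. Below is a minimal single-process sketch of that idea, with the ring hop simulated by a loop; the function and variable names are illustrative, not the library's API.

```python
import torch
import torch.nn.functional as F

def ring_attention_sketch(q, k, v, num_blocks=4):
    # q, k, v: (seq_len, dim). Each chunk plays the role of one device's shard.
    qs = q.chunk(num_blocks)
    ks, vs = k.chunk(num_blocks), v.chunk(num_blocks)
    scale = q.shape[-1] ** -0.5
    outs = []
    for qi in qs:
        # streaming-softmax accumulators, as in flash attention
        acc = torch.zeros_like(qi)
        row_max = qi.new_full((qi.shape[0], 1), float('-inf'))
        row_sum = qi.new_zeros(qi.shape[0], 1)
        for kj, vj in zip(ks, vs):          # one k/v block arrives per ring hop
            scores = (qi @ kj.T) * scale
            new_max = torch.maximum(row_max, scores.amax(-1, keepdim=True))
            corr = (row_max - new_max).exp()  # rescale previously accumulated stats
            p = (scores - new_max).exp()
            acc = acc * corr + p @ vj
            row_sum = row_sum * corr + p.sum(-1, keepdim=True)
            row_max = new_max
        outs.append(acc / row_sum)
    return torch.cat(outs)

# matches exact (non-causal) attention up to floating-point error
q, k, v = torch.randn(3, 128, 64).unbind()
expected = F.scaled_dot_product_attention(q[None], k[None], v[None])[0]
assert torch.allclose(ring_attention_sketch(q, k, v), expected, atol=1e-5)
```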
Alternatives and similar repositories for ring-attention-pytorch
Users interested in ring-attention-pytorch are comparing it to the libraries listed below.
- Ring attention implementation with flash attention (★ 1,006 · Sep 10, 2025 · Updated 7 months ago)
- Large Context Attention (★ 770 · Oct 13, 2025 · Updated 6 months ago)
- ring-attention experiments (★ 165 · Oct 17, 2024 · Updated last year)
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long Context Transformers Model Training and Inference (★ 664 · Jan 15, 2026 · Updated 3 months ago)
- Implementation of Infini-Transformer in Pytorch (★ 112 · Jan 4, 2025 · Updated last year)
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training (★ 222 · Aug 19, 2024 · Updated last year)
- Exploring an idea where one forgets about efficiency and carries out attention across each edge of the nodes (tokens) (★ 55 · Mar 25, 2025 · Updated last year)
- Explorations into the recently proposed Taylor Series Linear Attention, sketched after this list (★ 100 · Aug 18, 2024 · Updated last year)
- Implementation of Recurrent Memory Transformer, NeurIPS 2022 paper, in Pytorch (★ 422 · Jan 6, 2025 · Updated last year)
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware (★ 755 · Sep 27, 2024 · Updated last year)
- A PyTorch native platform for training generative AI models (★ 5,242 · Updated this week)
- Implementation of the proposed Adam-atan2 from Google DeepMind in Pytorch, sketched after this list (★ 136 · Oct 15, 2025 · Updated 6 months ago)
- Efficient implementations for emerging model architectures (★ 4,878 · Updated this week)
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients", sketched after this list (★ 104 · Dec 22, 2024 · Updated last year)
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" (★ 251 · Jun 6, 2025 · Updated 10 months ago)
- (★ 46 · Nov 10, 2023 · Updated 2 years ago)
- Tile primitives for speedy kernels (★ 3,312 · Apr 8, 2026 · Updated last week)
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… (★ 3,280 · Updated this week)
- Efficient Triton Kernels for LLM Training (★ 6,279 · Updated this week)
- Triton-based implementation of Sparse Mixture of Experts (★ 274 · Oct 3, 2025 · Updated 6 months ago)
- FlashInfer: Kernel Library for LLM Serving (★ 5,372 · Apr 11, 2026 · Updated last week)
- YaRN: Efficient Context Window Extension of Large Language Models, sketched after this list
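A few of the entries above reference techniques compact enough to sketch. The Taylor-series linear attention repo builds on the observation that exp(q·k) ≈ 1 + q·k + (q·k)²/2, and that this truncation factorizes into a feature map φ, so attention can be computed without materializing the n×n score matrix. A minimal non-causal sketch, with names of my own choosing rather than the repo's API:

```python
import torch

def taylor_feature(x):
    # phi(x) = [1, x, vec(x x^T)/sqrt(2)], so phi(q)·phi(k) = 1 + q·k + (q·k)^2 / 2
    x2 = (x.unsqueeze(-1) * x.unsqueeze(-2)).flatten(-2) / 2 ** 0.5
    return torch.cat((x.new_ones(*x.shape[:-1], 1), x, x2), dim=-1)

def taylor_linear_attention(q, k, v):
    # non-causal case: summarize keys/values once, O(n d^2) instead of O(n^2 d)
    q, k = taylor_feature(q), taylor_feature(k)
    kv = k.transpose(-2, -1) @ v                            # (feat, dim_v) summary
    z = q @ k.sum(dim=-2, keepdim=True).transpose(-2, -1)   # softmax-style normalizer
    return (q @ kv) / z

q, k, v = torch.randn(3, 32, 8).unbind()
exact = torch.softmax(q @ k.T, dim=-1) @ v   # unscaled dot product, for comparison
approx = taylor_linear_attention(q, k, v)    # approximation degrades as |q·k| grows
```

Note the normalizer is always positive, since 1 + x + x²/2 > 0 for all real x.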
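Adam-atan2 replaces Adam's m̂ / (√v̂ + ε) update with atan2(m̂, √v̂), which stays finite as both arguments approach zero and so removes the ε hyperparameter. A hedged sketch of a single step; the scaling constants a and b are illustrative, not necessarily the repo's defaults:

```python
import torch

@torch.no_grad()
def adam_atan2_step(p, m, v, lr=1e-3, betas=(0.9, 0.99), step=1, a=1.27, b=1.0):
    # standard Adam first/second moment updates with bias correction
    g = p.grad
    m.mul_(betas[0]).add_(g, alpha=1 - betas[0])
    v.mul_(betas[1]).addcmul_(g, g, value=1 - betas[1])
    m_hat = m / (1 - betas[0] ** step)
    v_hat = v / (1 - betas[1] ** step)
    # atan2 in place of m_hat / (v_hat.sqrt() + eps): bounded and finite at 0/0
    p.add_(torch.atan2(m_hat, b * v_hat.sqrt()), alpha=-lr * a)
```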
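Grokfast treats the per-parameter gradient sequence as a signal, low-pass filters it, and amplifies the slow component before the optimizer step. A minimal sketch of the EMA variant; alpha and lam follow the paper's notation, but the function name is mine:

```python
import torch

def grokfast_ema_filter(params, ema_state, alpha=0.98, lam=2.0):
    # call between loss.backward() and optimizer.step()
    for i, p in enumerate(params):
        if p.grad is None:
            continue
        if i not in ema_state:
            ema_state[i] = torch.zeros_like(p.grad)
        ema = ema_state[i]
        ema.mul_(alpha).add_(p.grad, alpha=1 - alpha)   # low-pass filter the gradient
        p.grad.add_(ema, alpha=lam)                     # amplify the slow component
```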
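Finally, YaRN extends a RoPE model's context window by interpolating the rotary frequencies non-uniformly: dimensions that complete many rotations over the original context are left untouched, dimensions that complete few are fully interpolated by the scale factor, and a ramp blends the two regimes (YaRN's attention-temperature correction is omitted here). A hedged sketch of the frequency rescaling, with threshold defaults taken from the paper:

```python
import torch

def yarn_inv_freq(dim, base=10000.0, scale=8.0, orig_ctx=4096,
                  beta_fast=32.0, beta_slow=1.0):
    # standard RoPE inverse frequencies over the even dimensions
    inv_freq = 1.0 / base ** (torch.arange(0, dim, 2).float() / dim)
    # full rotations each dimension completes over the original context window
    rotations = orig_ctx * inv_freq / (2 * torch.pi)
    # ramp: 0 where rotations >= beta_fast (keep as-is),
    #       1 where rotations <= beta_slow (fully interpolate)
    ramp = ((beta_fast - rotations) / (beta_fast - beta_slow)).clamp(0.0, 1.0)
    return inv_freq * (1 - ramp) + (inv_freq / scale) * ramp
```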