A simple but robust PyTorch implementation of RetNet from "Retentive Network: A Successor to Transformer for Large Language Models" (https://arxiv.org/pdf/2307.08621.pdf)
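The core idea behind RetNet (and most of the linear-attention repos listed below) is that retention can be computed in a parallel form for training and an equivalent recurrent form for O(1)-per-token inference. The sketch below is a minimal, framework-agnostic NumPy illustration of that equivalence for a single head, it is not code from this repository and omits RetNet's normalization, gating, and multi-scale decay; all function names and sizes are made up for illustration:

```python
import numpy as np

def retention_parallel(Q, K, V, gamma):
    """Parallel form: O = (Q K^T * D) V, where D[n, m] = gamma**(n - m)
    for n >= m and 0 otherwise (a causal, exponentially decaying mask)."""
    T = Q.shape[0]
    idx = np.arange(T)
    decay = gamma ** (idx[:, None] - idx[None, :])
    D = np.where(idx[:, None] >= idx[None, :], decay, 0.0)
    return (Q @ K.T * D) @ V

def retention_recurrent(Q, K, V, gamma):
    """Recurrent form: S_t = gamma * S_{t-1} + outer(k_t, v_t), o_t = q_t S_t.
    The state S has fixed size (d_k x d_v), giving constant cost per step."""
    d_k, d_v = K.shape[1], V.shape[1]
    S = np.zeros((d_k, d_v))
    out = np.empty((K.shape[0], d_v))
    for t in range(K.shape[0]):
        S = gamma * S + np.outer(K[t], V[t])
        out[t] = Q[t] @ S
    return out

# Toy sizes (hypothetical): sequence length T=8, head dimension d=4.
rng = np.random.default_rng(0)
Q, K, V = rng.standard_normal((3, 8, 4))
assert np.allclose(retention_parallel(Q, K, V, 0.9),
                   retention_recurrent(Q, K, V, 0.9))
```

Both forms produce the same outputs; the parallel form is a single masked matrix product (good for GPUs during training), while the recurrent form carries only the small state `S` between steps.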
☆106 · Nov 24, 2023 · Updated 2 years ago
Alternatives and similar repositories for yet-another-retnet
Users interested in yet-another-retnet are comparing it to the repositories listed below.
- Hugging Face-compatible implementation of RetNet (Retentive Networks, https://arxiv.org/pdf/2307.08621.pdf) including parallel, recurrent,… ☆227 · Mar 12, 2024 · Updated 2 years ago
- An implementation of "Retentive Network: A Successor to Transformer for Large Language Models" ☆1,214 · Oct 22, 2023 · Updated 2 years ago
- Template repo for Python projects, especially those focusing on machine learning and/or deep learning. ☆15 · Jan 14, 2026 · Updated 2 months ago
- An implementation of the paper "Retentive Network: A Successor to Transformer for Large Language Models" (https://arxiv.org/pdf/2307.08621.pdf) ☆11 · Jul 25, 2023 · Updated 2 years ago
- (Unofficial) Implementation of dilated attention from "LongNet: Scaling Transformers to 1,000,000,000 Tokens" (https://arxiv.org/abs/2307… ☆52 · Aug 7, 2023 · Updated 2 years ago
- PyTorch implementation of Retentive Network: A Successor to Transformer for Large Language Models ☆14 · Jul 20, 2023 · Updated 2 years ago
- ☆33 · Jan 9, 2024 · Updated 2 years ago
- [EMNLP 2023] Official implementation of the ETSC algorithm (Exact Toeplitz-to-SSM Conversion) from the paper "Accelerating Toeplitz… ☆14 · Oct 17, 2023 · Updated 2 years ago
- ☆16 · Mar 13, 2023 · Updated 3 years ago
- Code for the paper https://arxiv.org/pdf/2309.06979.pdf ☆21 · Jul 29, 2024 · Updated last year
- Official PyTorch Implementation of the Longhorn Deep State Space Model ☆57 · Dec 4, 2024 · Updated last year
- Here we will test various linear attention designs. ☆62 · Apr 25, 2024 · Updated last year
- VQ-TR repository ☆12 · Apr 18, 2024 · Updated last year
- ☆29 · Jul 9, 2024 · Updated last year
- Scalable Generation of Spatial Transcriptomics from Histology Images via Whole-Slide Flow Matching, ICML 2025 (Spotlight) ☆30 · Aug 11, 2025 · Updated 7 months ago
- A large-scale RWKV v7 (World, PRWKV, Hybrid-RWKV) inference. Capable of inference by combining multiple states (Pseudo MoE). Easy to deploy… ☆48 · Oct 21, 2025 · Updated 5 months ago
- ☆51 · Jan 28, 2024 · Updated 2 years ago
- (CVPR 2024) RMT: Retentive Networks Meet Vision Transformer ☆384 · Jul 29, 2024 · Updated last year
- Expanding linear RNN state-transition matrix eigenvalues to include negatives improves state-tracking tasks and language modeling without… ☆21 · Mar 15, 2025 · Updated last year
- Explorations into the recently proposed Taylor Series Linear Attention ☆100 · Aug 18, 2024 · Updated last year
- Foundation Architecture for (M)LLMs ☆3,137 · Apr 11, 2024 · Updated last year
- (Unofficial) PyTorch implementation of grouped-query attention (GQA) from "GQA: Training Generalized Multi-Query Transformer Models from … ☆190 · May 9, 2024 · Updated last year
- pip install continualcode ☆37 · Feb 10, 2026 · Updated last month
- [EMNLP 2023] Context Compression for Auto-regressive Transformers with Sentinel Tokens ☆25 · Nov 6, 2023 · Updated 2 years ago
- Implementations of various linear RNN layers using PyTorch and Triton ☆55 · Aug 4, 2023 · Updated 2 years ago
- Implementation of Retention-Network in PyTorch ☆17 · Aug 12, 2023 · Updated 2 years ago
- Fine-Tuning Pre-trained Transformers into Decaying Fast Weights ☆19 · Oct 9, 2022 · Updated 3 years ago
- A Transformer-framework-based couplet task ☆24 · Oct 29, 2023 · Updated 2 years ago
- PyTorch implementation of Soft MoE by Google Brain in "From Sparse to Soft Mixtures of Experts" (https://arxiv.org/pdf/2308.00951.pdf) ☆83 · Oct 5, 2023 · Updated 2 years ago
- RWKV-X is a linear-complexity hybrid language model based on the RWKV architecture, integrating sparse attention to improve the model's l… ☆56 · Mar 19, 2026 · Updated last week
- Unofficial implementation of the paper "Exploring the Space of Key-Value-Query Models with Intention" ☆12 · May 24, 2023 · Updated 2 years ago
- ☆14 · Mar 25, 2023 · Updated 3 years ago
- ☆22 · Nov 9, 2024 · Updated last year
- Transformers at any scale ☆42 · Jan 18, 2024 · Updated 2 years ago
- Provides utilities for converting Modu Corpus (모두의 말뭉치) data into a form convenient for analysis. ☆11 · Mar 2, 2022 · Updated 4 years ago
- RWKV, in easy-to-read code ☆73 · Mar 25, 2025 · Updated last year
- Open-sourcing code associated with the AAAI-25 paper "On the Expressiveness and Length Generalization of Selective State-Space Models on … ☆16 · Sep 18, 2025 · Updated 6 months ago
- ☆10 · Nov 16, 2024 · Updated last year
- Code for the paper "Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns" ☆18 · Mar 15, 2024 · Updated 2 years ago