An implementation of "Retentive Network: A Successor to Transformer for Large Language Models"
☆1,214 · Oct 22, 2023 · Updated 2 years ago
Alternatives and similar repositories for RetNet
Users interested in RetNet are comparing it to the libraries listed below.
- Huggingface-compatible implementation of RetNet (Retentive Networks, https://arxiv.org/pdf/2307.08621.pdf) including parallel, recurrent,… ☆226 · Mar 12, 2024 · Updated 2 years ago
- A simple but robust PyTorch implementation of RetNet from "Retentive Network: A Successor to Transformer for Large Language Models" (http… ☆106 · Nov 24, 2023 · Updated 2 years ago
- Foundation Architecture for (M)LLMs ☆3,135 · Apr 11, 2024 · Updated last year
- PyTorch implementation of Retentive Network: A Successor to Transformer for Large Language Models ☆14 · Jul 20, 2023 · Updated 2 years ago
- ☆14 · Jul 26, 2023 · Updated 2 years ago
- (CVPR 2024) RMT: Retentive Networks Meet Vision Transformer ☆384 · Jul 29, 2024 · Updated last year
- An implementation of the paper "Retentive Network: A Successor to Transformer for Large Language Models" (https://arxiv.org/pdf/2307.08621.pdf) ☆11 · Jul 25, 2023 · Updated 2 years ago
- Mamba SSM architecture ☆17,524 · Updated this week
- ☆33 · Jan 9, 2024 · Updated 2 years ago
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities ☆22,046 · Jan 23, 2026 · Updated last month
- RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable)… ☆14,419 · Mar 5, 2026 · Updated 2 weeks ago
- Running Llama 2 and other Open-Source LLMs on CPU Inference Locally for Document Q&A ☆974 · Nov 6, 2023 · Updated 2 years ago
- Meta-Transformer for Unified Multimodal Learning ☆1,655 · Dec 5, 2023 · Updated 2 years ago
- Fast and memory-efficient exact attention ☆22,832 · Updated this week
- Simple, minimal implementation of the Mamba SSM in one file of PyTorch. ☆2,929 · Mar 8, 2024 · Updated 2 years ago
- Implementation of plug-and-play Attention from "LongNet: Scaling Transformers to 1,000,000,000 Tokens" ☆714 · Jan 7, 2024 · Updated 2 years ago
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" ☆562 · Dec 28, 2024 · Updated last year
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks ☆7,201 · Jul 11, 2024 · Updated last year
- Official PyTorch Implementation of "Scalable Diffusion Models with Transformers" ☆8,433 · May 31, 2024 · Updated last year
- An ODE-based generative neural vocoder using Rectified Flow ☆58 · Apr 29, 2023 · Updated 2 years ago
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. ☆8,911 · May 3, 2024 · Updated last year
- ☆24 · Sep 25, 2024 · Updated last year
- Kolmogorov-Arnold Networks ☆16,211 · Jan 19, 2025 · Updated last year
- Implementation of "BitNet: Scaling 1-bit Transformers for Large Language Models" in PyTorch ☆1,900 · Feb 6, 2026 · Updated last month
- 🚀 Efficient implementations of state-of-the-art linear attention models ☆4,630 · Updated this week
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,850 · Jun 10, 2024 · Updated last year
- Large World Model -- Modeling Text and Video with Millions Context ☆7,402 · Oct 19, 2024 · Updated last year
- ☆51 · Jan 28, 2024 · Updated 2 years ago
- An efficient pure-PyTorch implementation of Kolmogorov-Arnold Network (KAN). ☆4,593 · Aug 1, 2024 · Updated last year
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆20,809 · Updated this week
- Official PyTorch Implementation for "TokenFlow: Consistent Diffusion Features for Consistent Video Editing" presenting "TokenFlow" (ICLR … ☆1,706 · Feb 3, 2025 · Updated last year
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆736 · Apr 10, 2024 · Updated last year
- [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond. ☆24,578 · Aug 12, 2024 · Updated last year
- Transformer-related optimization, including BERT, GPT ☆6,397 · Mar 27, 2024 · Updated last year
- [ICML 2024] Vision Mamba: Efficient Visual Representation Learning with Bidirectional State Space Model ☆3,820 · Feb 13, 2025 · Updated last year
- Implementation of Retention-Network in PyTorch ☆17 · Aug 12, 2023 · Updated 2 years ago
- A PyTorch-native platform for training generative AI models ☆5,162 · Updated this week
- Run any Llama 2 locally with gradio UI on GPU or CPU from anywhere (Linux/Windows/Mac). Use `llama2-wrapper` as your local llama2 backend… ☆1,944 · Mar 22, 2024 · Updated last year
- Hackable and optimized Transformers building blocks, supporting a composable construction. ☆10,373 · Updated this week
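Several of the repositories above implement both the parallel and recurrent forms of retention from the RetNet paper, which produce identical outputs. A minimal pure-Python sketch of that equivalence, using scalar per-token queries, keys, and values (an illustrative simplification; real RetNet implementations use vector-valued states, multiple heads with per-head decay, and positional rotations):

```python
# Minimal sketch of retention's parallel/recurrent equivalence.
# Scalar q, k, v per token is an illustrative simplification.

GAMMA = 0.9  # decay factor; RetNet fixes one decay per head


def retention_parallel(q, k, v, gamma=GAMMA):
    """O(n^2) training form: o_n = sum_{m<=n} gamma^(n-m) * q_n * k_m * v_m."""
    out = []
    for n in range(len(q)):
        o = sum(gamma ** (n - m) * q[n] * k[m] * v[m] for m in range(n + 1))
        out.append(o)
    return out


def retention_recurrent(q, k, v, gamma=GAMMA):
    """O(n) inference form: carry one state s_n = gamma * s_{n-1} + k_n * v_n."""
    s, out = 0.0, []
    for qn, kn, vn in zip(q, k, v):
        s = gamma * s + kn * vn  # constant-size state update
        out.append(qn * s)       # per-token readout
    return out


q = [0.5, -1.0, 2.0]
k = [1.0, 0.5, -0.5]
v = [2.0, 1.0, 3.0]
par = retention_parallel(q, k, v)
rec = retention_recurrent(q, k, v)
assert all(abs(a - b) < 1e-9 for a, b in zip(par, rec))
```

The parallel form is what makes training efficient on GPUs, while the recurrent form gives constant-memory autoregressive decoding; this dual formulation is the paper's central claim and what distinguishes the implementations listed here.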