☆29 · Updated May 4, 2024
Alternatives and similar repositories for nope_head_scale
Users interested in nope_head_scale are comparing it to the repositories listed below.
- [EMNLP 2022] Official implementation of Transnormer in our EMNLP 2022 paper - The Devil in Linear Transformer ☆64 · Updated Jul 30, 2023
- The official repository for our paper "The Neural Data Router: Adaptive Control Flow in Transformers Improves Systematic Generalization". ☆34 · Updated Jun 11, 2025
- Official PyTorch Implementation of the Longhorn Deep State Space Model ☆56 · Updated Dec 4, 2024
- ☆16 · Updated Dec 9, 2023
- Fine-Tuning Pre-trained Transformers into Decaying Fast Weights ☆19 · Updated Oct 9, 2022
- [EMNLP 2023] Official implementation of the algorithm ETSC: Exact Toeplitz-to-SSM Conversion in our EMNLP 2023 paper - Accelerating Toeplitz… ☆14 · Updated Oct 17, 2023
- Some preliminary explorations of Mamba's context scaling. ☆218 · Updated Feb 8, 2024
- Universal data I/O and neural network modules for NLP tasks. ☆18 · Updated Jun 21, 2022
- ☆16 · Updated Mar 13, 2023
- ☆62 · Updated Jun 17, 2024
- sigma-MoE layer ☆21 · Updated Jan 5, 2024
- [ACL 24 Findings] Implementation of Resonance RoPE and the PosGen synthetic dataset. ☆24 · Updated Mar 5, 2024
- The ECNU NLP group's seminar-style study of CS224n in the summer of 2017. ☆10 · Updated Aug 12, 2017
- ☆15 · Updated Mar 22, 2023
- DuoGuard: A Two-Player RL-Driven Framework for Multilingual LLM Guardrails ☆31 · Updated Feb 26, 2025
- [ICCV 2025] Official page for WikiAutoGen ☆24 · Updated Feb 6, 2026
- ☆15 · Updated Dec 5, 2019
- ☆22 · Updated Dec 1, 2021
- Official Repository for Efficient Linear-Time Attention Transformers. ☆18 · Updated Jun 2, 2024
- Scripts for downloading and pre-processing the `proof-pile`, a high-quality dataset of mathematical text and code. ☆22 · Updated Nov 26, 2022
- ☆27 · Updated this week
- HGRN2: Gated Linear RNNs with State Expansion ☆56 · Updated Aug 20, 2024
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" ☆247 · Updated Sep 12, 2025
- ☆96 · Updated Oct 8, 2023
- ☆29 · Updated Jul 9, 2024
- ☆106 · Updated Mar 9, 2024
- Flash Attention in 300-500 lines of CUDA/C++ ☆36 · Updated Aug 22, 2025
- The public GitHub repository for our paper "Transformer with a Mixture of Gaussian Keys" ☆28 · Updated Aug 13, 2022
- Software Engineering Back End Microservices Project ☆15 · Updated Nov 20, 2024
- [NeurIPS 2023 spotlight] Official implementation of HGRN in our NeurIPS 2023 paper - Hierarchically Gated Recurrent Neural Network for Se… ☆67 · Updated Apr 24, 2024
- An annotated implementation of the Hyena Hierarchy paper ☆34 · Updated May 28, 2023
- Is In-Context Learning Sufficient for Instruction Following in LLMs? [ICLR 2025] ☆32 · Updated Jan 23, 2025
- LongQLoRA: Extend Context Length of LLMs Efficiently ☆168 · Updated Nov 12, 2023
- ☆36 · Updated Feb 26, 2024
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models ☆35 · Updated Jun 12, 2024
- A repository for research on medium-sized language models. ☆78 · Updated May 23, 2024
- ☆31 · Updated Jul 2, 2023
- [NeurIPS 2022] Your Transformer May Not be as Powerful as You Expect (official implementation) ☆34 · Updated Aug 6, 2023
- Lottery Ticket Adaptation ☆40 · Updated Nov 20, 2024