BryceZhuo / PolyCom
The official implementation of the ICLR 2025 paper "Polynomial Composition Activations: Unleashing the Dynamics of Large Language Models".
☆15 · Updated 6 months ago
Alternatives and similar repositories for PolyCom
Users interested in PolyCom are comparing it to the repositories listed below.
- ☆50 · Updated 9 months ago
- ☆47 · Updated last year
- DeciMamba: Exploring the Length Extrapolation Potential of Mamba (ICLR 2025) ☆31 · Updated 6 months ago
- Official implementation of ECCV24 paper: POA ☆24 · Updated last year
- [COLM 2025] "C3PO: Critical-Layer, Core-Expert, Collaborative Pathway Optimization for Test-Time Expert Re-Mixing" ☆18 · Updated 6 months ago
- [ICLR 2025] Official Pytorch Implementation of "Mix-LN: Unleashing the Power of Deeper Layers by Combining Pre-LN and Post-LN" by Pengxia… ☆26 · Updated 3 months ago
- [ICML 2025] Code for "R2-T2: Re-Routing in Test-Time for Multimodal Mixture-of-Experts" ☆16 · Updated 7 months ago
- PyTorch implementation of StableMask (ICML'24) ☆14 · Updated last year
- ☆19 · Updated 9 months ago
- The official implementation of "DAPE: Data-Adaptive Positional Encoding for Length Extrapolation" ☆39 · Updated last year
- ☆34 · Updated 7 months ago
- ☆17 · Updated 8 months ago
- A simple torch implementation of high-performance Multi-Query Attention ☆15 · Updated 2 years ago
- A repository for DenseSSMs ☆89 · Updated last year
- Official implementation of RMoE (Layerwise Recurrent Router for Mixture-of-Experts) ☆24 · Updated last year
- Official PyTorch Implementation for Vision-Language Models Create Cross-Modal Task Representations, ICML 2025 ☆31 · Updated 6 months ago
- ☆16 · Updated 4 months ago
- HGRN2: Gated Linear RNNs with State Expansion ☆55 · Updated last year
- ☆35 · Updated 5 months ago
- Control LLM ☆20 · Updated 6 months ago
- Implementation of the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆109 · Updated this week
- Code for the paper "Cottention: Linear Transformers With Cosine Attention" ☆20 · Updated last year
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models ☆36 · Updated last year
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best… ☆53 · Updated 7 months ago
- User-friendly implementation of the Mixture-of-Sparse-Attention (MoSA). MoSA selects distinct tokens for each head with expert choice rou… ☆27 · Updated 6 months ago
- [ICLR 2025] Official Code Release for Explaining Modern Gated-Linear RNNs via a Unified Implicit Attention Formulation ☆47 · Updated 8 months ago
- [Technical Report] Official PyTorch implementation code for realizing the technical part of Phantom of Latent representing equipped with … ☆61 · Updated last year
- The open-source materials for the paper "Sparsing Law: Towards Large Language Models with Greater Activation Sparsity" ☆26 · Updated 11 months ago
- [ACL 2023] PuMer: Pruning and Merging Tokens for Efficient Vision Language Models ☆34 · Updated last year
- ☆75 · Updated 8 months ago