BryceZhuo / PolyCom
The official implementation of ICLR 2025 paper "Polynomial Composition Activations: Unleashing the Dynamics of Large Language Models".
☆15 · Updated 4 months ago
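The paper introduces polynomial composition activations, i.e. activations built by combining powers of a base nonlinearity. Below is a minimal, hedged sketch of that idea in PyTorch; the class name, polynomial order, and coefficient initialization are illustrative assumptions, not the repository's exact implementation.

```python
# Hedged sketch of a PolyReLU-style polynomial composition activation.
# Order and coefficient init are assumptions; see the PolyCom repo for the official code.
import torch
import torch.nn as nn


class PolyReLU(nn.Module):
    def __init__(self, order: int = 3):
        super().__init__()
        self.order = order
        # One learnable coefficient per power of ReLU(x), plus a constant term.
        self.coeffs = nn.Parameter(torch.ones(order + 1) / (order + 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        relu_x = torch.relu(x)
        # Weighted sum of powers 0..order of ReLU(x).
        out = self.coeffs[0] * torch.ones_like(x)
        for i in range(1, self.order + 1):
            out = out + self.coeffs[i] * relu_x.pow(i)
        return out


if __name__ == "__main__":
    act = PolyReLU(order=3)
    print(act(torch.randn(2, 4)).shape)  # torch.Size([2, 4])
```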
Alternatives and similar repositories for PolyCom
Users that are interested in PolyCom are comparing it to the libraries listed below
- This is the official implementation of "DAPE: Data-Adaptive Positional Encoding for Length Extrapolation" ☆39 · Updated 11 months ago
- ☆48 · Updated 7 months ago
- Official implementation of ECCV24 paper: POA ☆24 · Updated last year
- [ICLR 2025] Official PyTorch Implementation of "Mix-LN: Unleashing the Power of Deeper Layers by Combining Pre-LN and Post-LN" by Pengxia… ☆26 · Updated last month
- A repository for DenseSSMs ☆88 · Updated last year
- ☆18 · Updated 8 months ago
- This is a simple torch implementation of the high-performance Multi-Query Attention (see the sketch after this list) ☆16 · Updated 2 years ago
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best… ☆52 · Updated 6 months ago
- Control LLM ☆19 · Updated 5 months ago
- Official PyTorch Implementation for Vision-Language Models Create Cross-Modal Task Representations, ICML 2025 ☆30 · Updated 4 months ago
- Code for "C3PO: Critical-Layer, Core-Expert, Collaborative Pathway Optimization for Test-Time Expert Re-Mixing" ☆18 · Updated 5 months ago
- ☆35 · Updated 6 months ago
- PyTorch implementation of StableMask (ICML'24) ☆14 · Updated last year
- Official implementation of RMoE (Layerwise Recurrent Router for Mixture-of-Experts) ☆23 · Updated last year
- DeciMamba: Exploring the Length Extrapolation Potential of Mamba (ICLR 2025) ☆31 · Updated 5 months ago
- [NeurIPS 2024] An Efficient Recipe for Long Context Extension via Middle-Focused Positional Encoding ☆18 · Updated 11 months ago
- ☆47 · Updated last year
- X-Reasoner: Towards Generalizable Reasoning Across Modalities and Domains ☆47 · Updated 4 months ago
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models ☆33 · Updated last year
- RWKV-X is a Linear Complexity Hybrid Language Model based on the RWKV architecture, integrating Sparse Attention to improve the model's l… ☆47 · Updated 2 months ago
- ☆25 · Updated last month
- User-friendly implementation of the Mixture-of-Sparse-Attention (MoSA). MoSA selects distinct tokens for each head with expert choice rou… ☆26 · Updated 4 months ago
- Code for the paper "Cottention: Linear Transformers With Cosine Attention" ☆18 · Updated 11 months ago
- [Technical Report] Official PyTorch implementation code for realizing the technical part of Phantom of Latent representing equipped with … ☆61 · Updated 11 months ago
- [ICML 2025] Code for "R2-T2: Re-Routing in Test-Time for Multimodal Mixture-of-Experts" ☆16 · Updated 6 months ago
- ☆16 · Updated last year
- HGRN2: Gated Linear RNNs with State Expansion ☆54 · Updated last year
- ABC: Achieving Better Control of Multimodal Embeddings using VLMs [TMLR2025] ☆15 · Updated 3 weeks ago
- ☆15 · Updated 3 months ago
- ☆72 · Updated 7 months ago
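For the Multi-Query Attention entry above, here is a minimal sketch of the general idea: all query heads attend against a single shared key/value head. The class name, projections, and shapes are illustrative assumptions, not that repository's API.

```python
# Hedged sketch of multi-query attention: many query heads share one K/V head.
# Names and shapes are illustrative, not the listed repo's implementation.
import math
import torch
import torch.nn as nn


class MultiQueryAttention(nn.Module):
    def __init__(self, dim: int, num_heads: int):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.q_proj = nn.Linear(dim, dim)
        # Keys and values use a single head shared across all query heads.
        self.k_proj = nn.Linear(dim, self.head_dim)
        self.v_proj = nn.Linear(dim, self.head_dim)
        self.out_proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, _ = x.shape
        q = self.q_proj(x).view(b, t, self.num_heads, self.head_dim).transpose(1, 2)
        k = self.k_proj(x).unsqueeze(1)  # (b, 1, t, head_dim): broadcast over heads
        v = self.v_proj(x).unsqueeze(1)
        scores = q @ k.transpose(-2, -1) / math.sqrt(self.head_dim)  # (b, h, t, t)
        out = torch.softmax(scores, dim=-1) @ v                      # (b, h, t, head_dim)
        return self.out_proj(out.transpose(1, 2).reshape(b, t, -1))


if __name__ == "__main__":
    mqa = MultiQueryAttention(dim=64, num_heads=8)
    print(mqa(torch.randn(2, 16, 64)).shape)  # torch.Size([2, 16, 64])
```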