zhenyuhe00 / BiPE
Two Stones Hit One Bird: Bilevel Positional Encoding for Better Length Extrapolation, ICML 2024
☆22 · Updated last year
Alternatives and similar repositories for BiPE
Users interested in BiPE are comparing it to the repositories listed below
- Codes for Merging Large Language Models ☆35 · Updated last year
- MathFusion: Enhancing Mathematical Problem-solving of LLM through Instruction Fusion (ACL 2025) ☆35 · Updated 5 months ago
- [NeurIPS-2024] 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies https://arxiv.org/abs/2407.13623 ☆89 · Updated last year
- A Sober Look at Language Model Reasoning ☆92 · Updated last month
- Code for paper "Diffusion Language Models Can Perform Many Tasks with Scaling and Instruction-Finetuning" ☆84 · Updated last year
- The official implementation of "DAPE: Data-Adaptive Positional Encoding for Length Extrapolation" ☆40 · Updated last year
- ☆46 · Updated 9 months ago
- Code for ICLR 2025 Paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆107 · Updated 3 months ago
- ☆114 · Updated 3 months ago
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆56 · Updated 2 years ago
- ☆17 · Updated 5 months ago
- ☆26 · Updated last month
- [AAAI26] LongLLaDA: Unlocking Long Context Capabilities in Diffusion LLMs ☆50 · Updated last month
- [AAAI 2024] MELO: Enhancing Model Editing with Neuron-indexed Dynamic LoRA ☆26 · Updated last year
- Official repository for paper "DeepCritic: Deliberate Critique with Large Language Models" ☆40 · Updated 6 months ago
- [ICML 2025] M-STAR (Multimodal Self-Evolving TrAining for Reasoning) Project. Diving into Self-Evolving Training for Multimodal Reasoning ☆69 · Updated 5 months ago
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆151 · Updated 6 months ago
- [ICML'25] Our study systematically investigates massive values in LLMs' attention mechanisms. First, we observe massive values are concen… ☆84 · Updated 6 months ago
- ☆152 · Updated last year
- Large Language Models Can Self-Improve in Long-context Reasoning ☆73 · Updated last year
- AnchorAttention: Improved attention for LLMs long-context training ☆213 · Updated 11 months ago
- [NeurIPS 2024] A Novel Rank-Based Metric for Evaluating Large Language Models ☆56 · Updated 7 months ago
- The code and data for the paper JiuZhang3.0 ☆49 · Updated last year
- Code for "Towards Revealing the Mystery behind Chain of Thought: a Theoretical Perspective" ☆21 · Updated 2 years ago
- Discriminative Constrained Optimization for Reinforcing Large Reasoning Models ☆49 · Updated 2 months ago
- Code for paper "Unraveling Cross-Modality Knowledge Conflicts in Large Vision-Language Models." ☆50 · Updated last year
- Diffusion Language Models For Code Infilling Beyond Fixed-size Canvas ☆97 · Updated 3 months ago
- Exploring whether LLMs perform case-based or rule-based reasoning ☆30 · Updated last year
- RM-R1: Unleashing the Reasoning Potential of Reward Models ☆156 · Updated 6 months ago
- LongSpec: Long-Context Lossless Speculative Decoding with Efficient Drafting and Verification ☆72 · Updated 5 months ago