SakanaAI / DroPE
Extending the Context of Pretrained LLMs by Dropping Their Positional Embedding
⭐ 203 · Updated 3 weeks ago
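DroPE's premise, per the tagline above, is that a pretrained LLM's context can be extended by removing its positional embedding. As a rough illustration of what "dropping" means, the sketch below contrasts standard RoPE attention with the same attention where the rotation is simply skipped; the function and variable names are assumptions for illustration, not the repo's API, and this is not the paper's actual recipe:

```python
# Illustrative sketch only: contrasts RoPE attention with attention where the
# positional embedding is dropped (NoPE-style). Names are hypothetical and not
# taken from the DroPE repository.
import torch
import torch.nn.functional as F

def rope_rotate(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    """Apply rotary position embedding to a (batch, heads, seq, head_dim) tensor."""
    _, _, t, d = x.shape
    half = d // 2
    inv_freq = base ** (-torch.arange(half, dtype=x.dtype) / half)        # (half,)
    angles = torch.arange(t, dtype=x.dtype)[:, None] * inv_freq[None, :]  # (t, half)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., :half], x[..., half:]
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

def causal_attention(q, k, v, drop_pos_emb: bool) -> torch.Tensor:
    """Causal self-attention; drop_pos_emb=True skips the positional rotation."""
    if not drop_pos_emb:
        q, k = rope_rotate(q), rope_rotate(k)
    return F.scaled_dot_product_attention(q, k, v, is_causal=True)

q = k = v = torch.randn(1, 8, 16, 64)      # (batch, heads, seq, head_dim)
baseline = causal_attention(q, k, v, drop_pos_emb=False)  # usual RoPE path
dropped = causal_attention(q, k, v, drop_pos_emb=True)    # positions removed
```

Causal masking lets decoder-only models encode position implicitly, which is why removing the explicit embedding is plausible at all; see the repo for the actual procedure.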
Alternatives and similar repositories for DroPE
Users interested in DroPE are comparing it to the libraries listed below
- Small Batch Size Training for Language Models · ⭐ 80 · Updated 4 months ago
- This repo contains the source code for the paper "Evolution Strategies at Scale: LLM Fine-Tuning Beyond Reinforcement Learning" · ⭐ 292 · Updated 2 months ago
- Esoteric Language Models · ⭐ 111 · Updated this week
- ⭐ 91 · Updated last year
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) · ⭐ 110 · Updated 11 months ago
- Official repo of the paper LM2 · ⭐ 46 · Updated 11 months ago
- MatFormer repo · ⭐ 70 · Updated last year
- Universal Reasoning Model · ⭐ 122 · Updated 3 weeks ago
- Tiny re-implementation of MDM in the style of LLaDA and the nano-gpt speedrun · ⭐ 56 · Updated 11 months ago
- EvaByte: Efficient Byte-level Language Models at Scale · ⭐ 115 · Updated 9 months ago
- The official GitHub repo for "Diffusion Language Models are Super Data Learners". · ⭐ 221 · Updated 3 months ago
- PyTorch implementation of models from the Zamba2 series. · ⭐ 186 · Updated last year
- Landing repository for the paper "Softpick: No Attention Sink, No Massive Activations with Rectified Softmax" · ⭐ 86 · Updated 4 months ago
- The Automated LLM Speedrunning Benchmark measures how well LLM agents can reproduce previous innovations and discover new ones in languag… · ⭐ 128 · Updated 4 months ago
- [ICLR 2026] Official PyTorch Implementation of RLP: Reinforcement as a Pretraining Objective · ⭐ 231 · Updated 2 weeks ago
- Training teachers with reinforcement learning to make LLMs learn how to reason for test-time scaling. · ⭐ 358 · Updated 7 months ago
- ⭐ 171 · Updated last week
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts", by Xu Owen He at DeepMind · ⭐ 135 · Updated 3 months ago
- Simple & Scalable Pretraining for Neural Architecture Research · ⭐ 307 · Updated 2 months ago
- DeMo: Decoupled Momentum Optimization · ⭐ 198 · Updated last year
- [ICLR 2026] GRAPE: Group Representational Position Encoding (https://arxiv.org/abs/2512.07805) · ⭐ 78 · Updated 2 weeks ago
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. · ⭐ 175 · Updated last year
- ⭐ 67 · Updated 10 months ago
- Large multi-modal models (L3M) pre-training. · ⭐ 230 · Updated 4 months ago
- Official JAX implementation of End-to-End Test-Time Training for Long Context · ⭐ 520 · Updated 2 weeks ago
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best… · ⭐ 59 · Updated 10 months ago
- Chain of Experts (CoE) enables communication between experts within Mixture-of-Experts (MoE) models · ⭐ 228 · Updated 3 months ago
- H-Net Dynamic Hierarchical Architecture · ⭐ 81 · Updated 4 months ago
- RWKV-X is a Linear Complexity Hybrid Language Model based on the RWKV architecture, integrating Sparse Attention to improve the model's l… · ⭐ 54 · Updated 3 weeks ago
- RWKV-7: Surpassing GPT · ⭐ 104 · Updated last year