SakanaAI / DroPE
Extending the Context of Pretrained LLMs by Dropping Their Positional Embedding
☆193 · Updated 3 weeks ago
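For context on the technique named in the title, here is a minimal illustrative sketch of what "dropping the positional embedding" can look like in a standard attention block with rotary embeddings (RoPE). This is not the DroPE repo's code; the `attention` function and the `use_rope` flag are hypothetical names used only for illustration.

```python
# Illustrative sketch only (hypothetical names, not the DroPE repo's API):
# a causal attention call with an optional rotary embedding, so "dropping"
# the positional embedding amounts to passing use_rope=False.
import torch
import torch.nn.functional as F

def attention(q, k, v, rope=None, use_rope=True):
    # q, k, v: (batch, heads, seq_len, head_dim)
    if use_rope and rope is not None:
        q, k = rope(q), rope(k)  # apply rotary position embedding when enabled
    return F.scaled_dot_product_attention(q, k, v, is_causal=True)

# With use_rope=False, token ordering is conveyed only by the causal mask.
q = k = v = torch.randn(1, 8, 16, 64)
out = attention(q, k, v, use_rope=False)
print(out.shape)  # torch.Size([1, 8, 16, 64])
```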
Alternatives and similar repositories for DroPE
Users interested in DroPE are comparing it to the repositories listed below.
- Esoteric Language Models ☆110 · Updated 2 months ago
- MatFormer repo ☆70 · Updated last year
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆110 · Updated 10 months ago
- This repo contains the source code for the paper "Evolution Strategies at Scale: LLM Fine-Tuning Beyond Reinforcement Learning" ☆288 · Updated 2 months ago
- 📄 Small Batch Size Training for Language Models ☆80 · Updated 3 months ago
- Tiny re-implementation of MDM in the style of LLaDA and the nanoGPT speedrun ☆56 · Updated 10 months ago
- Landing repository for the paper "Softpick: No Attention Sink, No Massive Activations with Rectified Softmax" ☆86 · Updated 4 months ago
- Simple & Scalable Pretraining for Neural Architecture Research ☆307 · Updated last month
- The official GitHub repo for "Diffusion Language Models are Super Data Learners". ☆219 · Updated 2 months ago
- Training teachers with reinforcement learning to teach LLMs how to reason for test-time scaling. ☆358 · Updated 7 months ago
- Large multi-modal models (L3M) pre-training. ☆229 · Updated 4 months ago
- Official JAX implementation of End-to-End Test-Time Training for Long Context ☆478 · Updated 2 weeks ago
- DeMo: Decoupled Momentum Optimization ☆198 · Updated last year
- EvaByte: Efficient Byte-level Language Models at Scale ☆115 · Updated 9 months ago
- The Automated LLM Speedrunning Benchmark measures how well LLM agents can reproduce previous innovations and discover new ones in languag… ☆127 · Updated 3 months ago
- ☆91 · Updated last year
- PyTorch implementation of models from the Zamba2 series. ☆186 · Updated last year
- Open-source release accompanying Gao et al. 2025 ☆498 · Updated last month
- Universal Reasoning Model ☆121 · Updated 2 weeks ago
- ☆169 · Updated 4 months ago
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆175 · Updated last year
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆134 · Updated 3 months ago
- ☆206 · Updated last year
- ☆66 · Updated 10 months ago
- H-Net Dynamic Hierarchical Architecture ☆81 · Updated 4 months ago
- Chain of Experts (CoE) enables communication between experts within Mixture-of-Experts (MoE) models ☆227 · Updated 2 months ago
- [ICLR 2026] Official PyTorch Implementation of RLP: Reinforcement as a Pretraining Objective ☆226 · Updated last week
- NanoGPT-speedrunning for the poor T4 enjoyers ☆73 · Updated 9 months ago
- The code repository for the paper "Competition and Attraction Improve Model Fusion" ☆169 · Updated 5 months ago
- Lightweight toolkit for training and fine-tuning 1.58-bit language models ☆110 · Updated 8 months ago