microsoft / LongRoPE
LongRoPE is a novel method that extends the context window of pre-trained LLMs to 2048k tokens.
☆225 · Updated 8 months ago
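For context on what these alternatives relate to: context-extension methods in this family rescale rotary position embeddings (RoPE) so that positions beyond the trained window map back into a familiar rotation-angle range. The sketch below illustrates only that general mechanism; the uniform `scale` factor, function name, and shapes are illustrative assumptions, not this repository's API (LongRoPE itself searches for non-uniform, per-dimension rescale factors).

```python
# Minimal sketch (not the repository's API): rotary position embeddings with a
# rescale factor, the general mechanism that context-extension methods such as
# LongRoPE build on. LongRoPE searches for non-uniform per-dimension factors;
# a single uniform `scale` is used here purely for illustration.
import torch

def rotary_embed(x: torch.Tensor, scale: float = 1.0, base: float = 10000.0) -> torch.Tensor:
    """Apply RoPE to x of shape (seq_len, dim); positions are divided by `scale`
    so a model trained on a short window sees in-range rotation angles."""
    seq_len, dim = x.shape
    # Per-pair rotation frequencies, as in the original RoPE formulation.
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2, dtype=torch.float32) / dim))
    # Rescaled positions: interpolation squeezes long positions into the trained range.
    positions = torch.arange(seq_len, dtype=torch.float32) / scale
    angles = torch.outer(positions, inv_freq)   # (seq_len, dim/2)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[:, 0::2], x[:, 1::2]             # even/odd feature pairs
    rotated = torch.empty_like(x)
    rotated[:, 0::2] = x1 * cos - x2 * sin
    rotated[:, 1::2] = x1 * sin + x2 * cos
    return rotated

# Example: queries for an 8k-token sequence squeezed into a 2k trained window (scale=4).
q = torch.randn(8192, 64)
q_rope = rotary_embed(q, scale=4.0)
```

A uniform scale of 4 squeezes an 8k-token sequence into the angle range of a 2k-token training window; LongRoPE's contribution is searching for better, non-uniform factors that reach much larger windows.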
Alternatives and similar repositories for LongRoPE:
Users interested in LongRoPE are comparing it to the repositories listed below
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆406 · Updated 6 months ago
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆153 · Updated 3 weeks ago
- Implementation of the paper "LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens" ☆136 · Updated 9 months ago
- ☆287 · Updated last month
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs ☆163 · Updated this week
- 🌾 OAT: A research-friendly framework for LLM online alignment, including preference learning, reinforcement learning, etc. ☆336 · Updated 2 weeks ago
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… (see the sketch after this list) ☆322 · Updated 4 months ago
- Tina: Tiny Reasoning Models via LoRA ☆164 · Updated last week
- A framework to study AI models in Reasoning, Alignment, and use of Memory (RAM). ☆222 · Updated last month
- Official PyTorch implementation of DistiLLM: Towards Streamlined Distillation for Large Language Models (ICML 2024) ☆213 · Updated last month
- Simple extension on vLLM to help you speed up reasoning models without training. ☆148 · Updated this week
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate"☆141Updated 2 weeks ago
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024☆291Updated this week
- Reproducible, flexible LLM evaluations☆197Updated last month
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs.☆409Updated last year
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning☆195Updated last month
- [ICML 2024] CLLMs: Consistency Large Language Models☆390Updated 5 months ago
- ☆192Updated 2 months ago
- Benchmark and research code for the paper "SWEET-RL: Training Multi-Turn LLM Agents on Collaborative Reasoning Tasks" ☆186 · Updated 3 weeks ago
- A project to improve the skills of large language models ☆354 · Updated this week
- Advancing Language Model Reasoning through Reinforcement Learning and Inference Scaling ☆101 · Updated 3 months ago
- Official Repo for "Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale" ☆242 · Updated 2 weeks ago
- The code and data for "MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark" [NeurIPS 2024] ☆237 · Updated 2 months ago
- The official evaluation suite and dynamic data release for MixEval. ☆238 · Updated 5 months ago
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆459 · Updated last year
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆456 · Updated 2 months ago
- PyTorch building blocks for the OLMo ecosystem ☆205 · Updated this week
- [ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning ☆354 · Updated 8 months ago
- Homepage for ProLong (Princeton long-context language models) and paper "How to Train Long-Context Language Models (Effectively)" ☆177 · Updated last month
- Unofficial implementation for the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆158 · Updated 10 months ago
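The memory-layers entry above mentions a trainable key-value lookup that adds parameters without increasing FLOPs; the sketch below illustrates that general idea only. The class name, slot count, and the dense scoring step are assumptions made for brevity, not the linked repository's code; practical memory layers use product-key or other approximate routing so that not every slot has to be scored.

```python
# Rough sketch of a sparse key-value memory layer (illustrative only; names,
# shapes, and routing are assumptions, not the linked repository's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseKVMemory(nn.Module):
    """A large learned key/value table queried with a top-k lookup: adding rows
    grows the parameter count, but each token only reads k rows of values."""

    def __init__(self, dim: int, num_slots: int = 4096, top_k: int = 4):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(num_slots, dim) * 0.02)
        self.values = nn.Parameter(torch.randn(num_slots, dim) * 0.02)
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, dim) -> similarity against every key: (batch, seq, num_slots).
        # Real memory layers avoid this dense scoring via product-key lookup.
        scores = x @ self.keys.t()
        topk_scores, topk_idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(topk_scores, dim=-1)   # (batch, seq, k)
        gathered = self.values[topk_idx]           # (batch, seq, k, dim)
        # Weighted sum of the k retrieved value rows, added residually.
        return x + (weights.unsqueeze(-1) * gathered).sum(dim=-2)

# Example usage with a hypothetical hidden size of 256.
layer = SparseKVMemory(dim=256, num_slots=8192, top_k=4)
out = layer(torch.randn(2, 16, 256))
```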