"Found in the Middle: How Language Models Use Long Contexts Better via Plug-and-Play Positional Encoding" Zhenyu Zhang, Runjin Chen, Shiwei Liu, Zhewei Yao, Olatunji Ruwase, Beidi Chen, Xiaoxia Wu, Zhangyang Wang.
☆34May 7, 2024Updated last year
Alternatives and similar repositories for Ms-PoE
Users interested in Ms-PoE are comparing it to the libraries listed below.
- ☆16 · Jun 25, 2025 · Updated 10 months ago
- Code for paper: Long cOntext aliGnment via efficient preference Optimization ☆24 · Oct 10, 2025 · Updated 6 months ago
- To mitigate position bias in LLMs, especially in long-context scenarios, we scale only one dimension of LLMs, reducing position bias and … ☆11 · Jun 18, 2024 · Updated last year
- ☆25 · Dec 12, 2025 · Updated 4 months ago
- Code and data for "Lost in the Middle: How Language Models Use Long Contexts" ☆377 · Jan 4, 2024 · Updated 2 years ago
- Adaptation of Google's official ROUGE implementation for use with Korean text ☆17 · Jan 3, 2024 · Updated 2 years ago
- NLP project + PyTorch ☆10 · Oct 17, 2020 · Updated 5 years ago
- Homepage for ProLong (Princeton long-context language models) and paper "How to Train Long-Context Language Models (Effectively)" ☆250 · Sep 12, 2025 · Updated 7 months ago
- Open-source code for paper: Retrieval Head Mechanistically Explains Long-Context Factuality ☆238 · Aug 2, 2024 · Updated last year
- ☆12 · Jun 13, 2025 · Updated 10 months ago
- ☆20 · Jan 16, 2024 · Updated 2 years ago
- Compiler principles course project at Tongji University ☆11 · Jun 12, 2021 · Updated 4 years ago
- The HELMET Benchmark ☆214 · Apr 17, 2026 · Updated 2 weeks ago
- ☆47 · Mar 15, 2025 · Updated last year
- Official repository of paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆27 · Apr 17, 2024 · Updated 2 years ago
- Comparing retrieval abilities of GPT-4 Turbo and a RAG system on a toy example across various context lengths ☆35 · Dec 1, 2023 · Updated 2 years ago
- Applies ROME and MEMIT to Mamba-S4 models ☆15 · Apr 5, 2024 · Updated 2 years ago
- The official implementation of "LightTransfer: Your Long-Context LLM is Secretly a Hybrid Model with Effortless Adaptation" ☆22 · Apr 22, 2025 · Updated last year
- Small web app built with Flask and Bootstrap, implementing full-text search (spell checking and correction, inverted index, TF-IDF document ranking) and article browsing (article summaries, full-text reading) ☆16 · Dec 8, 2022 · Updated 3 years ago
- Fine-tuning Korean natural language processing models ☆17 · Jan 26, 2021 · Updated 5 years ago
- A search-engine website built on Elasticsearch ☆14 · Nov 22, 2022 · Updated 3 years ago
- Long Context Research ☆32 · Jan 26, 2026 · Updated 3 months ago
- Repository for the paper "Distance between Relevant Information Pieces Causes Bias in Long-Context LLMs" ☆14 · Dec 16, 2024 · Updated last year
- LongMIT: Essential Factors in Crafting Effective Long Context Multi-Hop Instruction Datasets ☆42 · Sep 30, 2024 · Updated last year
- Fine-tune of Florence-2 for shot categorization ☆26 · Mar 6, 2025 · Updated last year
- Flash-Muon: An Efficient Implementation of the Muon Optimizer ☆248 · Jun 15, 2025 · Updated 10 months ago
- UCAS HPC course code ☆15 · May 24, 2023 · Updated 2 years ago
- ☆12 · Nov 1, 2020 · Updated 5 years ago
- ☆14 · May 9, 2024 · Updated last year
- The code of our paper "InfLLM: Unveiling the Intrinsic Capacity of LLMs for Understanding Extremely Long Sequences with Training-Free Mem…" ☆401 · Apr 20, 2024 · Updated 2 years ago
- Simplified packaging for pybind11-based C++ extensions ☆13 · Jun 3, 2022 · Updated 3 years ago
- 16-fold memory access reduction with nearly no loss ☆108 · Mar 26, 2025 · Updated last year
- KV cache compression for high-throughput LLM inference ☆157 · Feb 5, 2025 · Updated last year
- Linux /proc data in a consistent, parsed format ☆10 · Mar 28, 2016 · Updated 10 years ago
- ☆46 · Apr 13, 2022 · Updated 4 years ago
- [ICML '25] Official code for paper "Occult: Optimizing Collaborative Communication across Experts for Accelerated Parallel MoE Training an…" ☆13 · Apr 17, 2025 · Updated last year
- BABILong is a benchmark for LLM evaluation using the needle-in-a-haystack approach ☆247 · Sep 2, 2025 · Updated 8 months ago
- Official PyTorch code for "APP: Anytime Progressive Pruning" (DyNN @ ICML 2022; CLL @ ACML 2022; SNN @ ICML 2022; and SlowDNN 2023) ☆16 · Nov 22, 2022 · Updated 3 years ago
- Official implementation for Text Generation Beyond Discrete Token Sampling ☆25 · Aug 11, 2025 · Updated 8 months ago