☆106 · Updated Jun 20, 2023

Alternatives and similar repositories for Parallel-Context-Windows

Users interested in Parallel-Context-Windows are comparing it to the libraries listed below.
- Naive Bayes-based Context Extension · ☆327 · Updated Dec 9, 2024
- [ICML 2023] "Data Efficient Neural Scaling Law via Model Reusing" by Peihao Wang, Rameswar Panda, Zhangyang Wang · ☆14 · Updated Jan 4, 2024
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models · ☆78 · Updated Mar 12, 2024
- A Structured Span Selector (NAACL 2022). A structured span selector with a WCFG for span selection tasks (coreference resolution, semanti… · ☆21 · Updated Jul 11, 2022
- ☆30 · Updated Sep 28, 2023
- LongBench v2 and LongBench (ACL '24 & '25) · ☆1,122 · Updated Jan 15, 2025
- YaRN: Efficient Context Window Extension of Large Language Models · ☆1,685 · Updated Apr 17, 2024
- Fast and Modularized CFG-focused Models · ☆23 · Updated Nov 8, 2023
- Code for the paper "Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns" · ☆18 · Updated Mar 15, 2024
- Landmark Attention: Random-Access Infinite Context Length for Transformers · ☆426 · Updated Dec 20, 2023
- One Network, Many Masks: Towards More Parameter-Efficient Transfer Learning · ☆40 · Updated Jul 1, 2023
- The accompanying code for "Memory-efficient Transformers via Top-k Attention" (Ankit Gupta, Guy Dar, Shaya Goodman, David Ciprut, Jonatha… · ☆69 · Updated Sep 19, 2021
- Advanced Formal Language Theory (263-5352-00L; Spring 2023) · ☆10 · Updated Feb 21, 2023
- Chrome extension for OA sites like arXiv and OpenReview: 1. PDF back to abstract page, 2. Rename PDF page with paper title. · ☆18 · Updated Oct 12, 2023
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) · ☆209 · Updated May 20, 2024
- ☆18 · Updated Mar 10, 2023
- ☆11 · Updated Jun 5, 2024
- Rectified Rotary Position Embeddings · ☆388 · Updated May 20, 2024
- Code for "Mixed Cross Entropy Loss for Neural Machine Translation" · ☆20 · Updated Jul 23, 2021
- ☆32 · Updated Jan 7, 2024
- Blog post · ☆17 · Updated Feb 16, 2024
- Unsupervised Natural Language Parsing (Tutorial) · ☆22 · Updated Apr 19, 2021
- ☆52 · Updated Jan 19, 2023
- A Kernel-Based View of Language Model Fine-Tuning (https://arxiv.org/abs/2210.05643) · ☆78 · Updated Sep 4, 2023
- ☆31 · Updated Jul 2, 2023
- LongAttn: Selecting Long-context Training Data via Token-level Attention · ☆15 · Updated Jul 16, 2025
- Curse-of-memory phenomenon of RNNs in sequence modelling · ☆19 · Updated May 8, 2025
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" · ☆490 · Updated Mar 19, 2024
- ☆11 · Updated Oct 11, 2023
- Official repository for Efficient Linear-Time Attention Transformers · ☆18 · Updated Jun 2, 2024
- Official repository for LongChat and LongEval · ☆533 · Updated May 24, 2024
- Code for the ACL 2021 paper "Structural Guidance for Transformer Language Models" · ☆13 · Updated Sep 17, 2025
- Official documentation for the DSPy library · ☆21 · Updated this week
- ☆29 · Updated Jul 9, 2024
- FocusLLM: Scaling LLM's Context by Parallel Decoding · ☆44 · Updated Dec 8, 2024
- Code for the "Long Context Needs Some R&R" paper · ☆12 · Updated Mar 11, 2024
- A tool that facilitates debugging convergence issues and testing new algorithms and recipes for training LLMs using NVIDIA libraries such as… · ☆19 · Updated Sep 17, 2025
- Code for the NeurIPS 2022 Spotlight paper "Non-Monotonic Latent Alignments for CTC-Based Non-Autoregressive Machine Translation" · ☆20 · Updated Nov 16, 2022
- Fine-Tuning Pre-trained Transformers into Decaying Fast Weights · ☆19 · Updated Oct 9, 2022