Recurrent Memory Transformer
☆155 · Updated Aug 14, 2023
Alternatives and similar repositories for LM-RMT
Users interested in LM-RMT are comparing it to the repositories listed below.
- [NeurIPS 2022] [AAAI 2024] Recurrent Transformer-based long-context architecture — ☆775, updated Oct 25, 2024
- ☆68, updated Aug 29, 2024
- [EMNLP 2023] Adapting Language Models to Compress Long Contexts — ☆331, updated Sep 9, 2024
- [EMNLP 2023] Context Compression for Auto-regressive Transformers with Sentinel Tokens — ☆25, updated Nov 6, 2023
- Official implementation of the ACL 2023 paper "Don't Parse, Choose Spans! Continuous and Discontinuous Constituency Parsing via Autoregressive Span Selection" — ☆14, updated Aug 25, 2023
- [NeurIPS 2023 spotlight] Official implementation of HGRN from the NeurIPS 2023 paper "Hierarchically Gated Recurrent Neural Network for Sequence Modeling" — ☆67, updated Apr 24, 2024
- Tools and scripts for experimenting with Transformers: BERT, T5, etc. — ☆61, updated Jan 6, 2024
- Building language models that predict more than one token ahead, enabling further-ahead predictions — ☆12, updated May 22, 2025
- ☆29, updated Jul 9, 2024
- ☆62, updated Jun 17, 2024
- Official implementation of the paper "Contrastive Learning of Sentence Embeddings from Scratch" — ☆40, updated Jun 9, 2023
- Public repo for the NeurIPS 2023 paper "Unlimiformer: Long-Range Transformers with Unlimited Length Input" — ☆1,065, updated Mar 7, 2024
- ☆15, updated Nov 20, 2023
- Code for ICML 2024 paper — ☆35, updated Sep 18, 2025
- Code for the COLING paper "A Hybrid Model of Classification and Generation for Spatial Relation Extraction" — ☆10, updated Oct 20, 2022
- Official PyTorch implementation of the Longhorn deep state-space model — ☆56, updated Dec 4, 2024
- ☆106, updated Jun 20, 2023
- BABILong: a benchmark for LLM evaluation using the needle-in-a-haystack approach — ☆239, updated Sep 2, 2025
- Scripts for downloading and pre-processing `proof-pile`, a high-quality dataset of mathematical text and code — ☆22, updated Nov 26, 2022
- PyTorch implementation of "Compressed Context Memory for Online Language Model Interaction" (ICLR 2024) — ☆63, updated Apr 18, 2024
- An experiment on Dynamic NTK Scaling RoPE — ☆64, updated Nov 26, 2023
- Official implementation of TransNormerLLM: A Faster and Better LLM — ☆252, updated Jan 23, 2024
- Code for the ALiBi method for transformer language models (ICLR 2022) — ☆552, updated Oct 30, 2023
- Official repository for the EACL 2023 paper "LongEval: Guidelines for Human Evaluation of Faithfulness in Long-form Summarization" — ☆44, updated Aug 10, 2024
- ☆52, updated Jan 19, 2023
- Implementation for the ACL 2024 paper "Meta-Task Prompting Elicits Embeddings from Large Language Models" — ☆12, updated Jul 25, 2024
- Implementation of "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" — ☆40, updated Nov 11, 2024
- Advanced Formal Language Theory (263-5352-00L; Spring 2023) — ☆10, updated Feb 21, 2023
- [ACL 2023] Are Pre-trained Language Models Useful for Model Ensemble in Chinese Grammatical Error Correction? — ☆10, updated Dec 15, 2025
- Reference implementation of Reward-Augmented Decoding: Efficient Controlled Text Generation with a Unidirectional Reward Model — ☆45, updated Oct 1, 2025
- SimplifiedTransformer simplifies the transformer block without affecting training: skip connections, projection parameters, and sequential sub-blocks are removed — ☆15, updated Feb 6, 2026
- Code to train a Sentence BERT Japanese model for the Hugging Face Model Hub — ☆11, updated Aug 8, 2021
- YaRN: Efficient Context Window Extension of Large Language Models — ☆1,673, updated Apr 17, 2024
- [ICML 2024 NGSM workshop] Associative Recurrent Memory Transformer implementation and scripts for training and evaluation — ☆61, updated this week
- LongLLaMA: a large language model capable of handling long contexts, based on OpenLLaMA and fine-tuned with the Focused Transformer (FoT) method — ☆1,463, updated Nov 7, 2023
- ☆51, updated Jan 28, 2024
- ☆184, updated May 26, 2023
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" — ☆27, updated Apr 17, 2024
- [ICLR 2023] PyTorch code for Summarization Programs: Interpretable Abstractive Summarization with Neural Modular Trees — ☆24, updated Jun 19, 2023