RodkinIvan / associative-recurrent-memory-transformer
[ICML 24 NGSM workshop] Associative Recurrent Memory Transformer implementation and scripts for training and evaluation
☆ 56 · Updated this week
Alternatives and similar repositories for associative-recurrent-memory-transformer
Users interested in associative-recurrent-memory-transformer are comparing it to the repositories listed below.
- GoldFinch and other hybrid transformer components ☆ 45 · Updated last year
- A repository for research on medium-sized language models. ☆ 78 · Updated last year
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling ☆ 35 · Updated 2 weeks ago
- ☆ 86 · Updated last year
- Using FlexAttention to compute attention with different masking patterns ☆ 47 · Updated last year
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆ 27 · Updated last year
- ☆ 39 · Updated last year
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆ 38 · Updated 4 months ago
- Code for the NeurIPS 2024 Spotlight "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆ 84 · Updated last year
- EvaByte: Efficient Byte-level Language Models at Scale ☆ 110 · Updated 6 months ago
- ☆ 24 · Updated 7 months ago
- The Automated LLM Speedrunning Benchmark measures how well LLM agents can reproduce previous innovations and discover new ones in languag… ☆ 109 · Updated 3 weeks ago
- Official repository for "BLEUBERI: BLEU is a surprisingly effective reward for instruction following" ☆ 28 · Updated 4 months ago
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆ 48 · Updated last year
- Code implementation, evaluations, documentation, links, and resources for the Min P paper ☆ 43 · Updated 2 months ago
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs ☆ 92 · Updated 11 months ago
- Aioli: A unified optimization framework for language model data mixing ☆ 28 · Updated 9 months ago
- The official repository for Inheritune. ☆ 115 · Updated 8 months ago
- Exploration of automated dataset selection approaches at large scales. ☆ 48 · Updated 7 months ago
- Linear Attention Sequence Parallelism (LASP) ☆ 87 · Updated last year
- Code for RATIONALYST: Pre-training Process-Supervision for Improving Reasoning (https://arxiv.org/pdf/2410.01044) ☆ 35 · Updated last year
- ☆ 108 · Updated last year
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆ 130 · Updated 11 months ago
- Implementation of the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in pyTO… ☆ 56 · Updated 2 weeks ago
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆ 59 · Updated last year
- ☆ 74 · Updated last year
- Synthetic data generation and benchmark implementation for "Episodic Memories Generation and Evaluation Benchmark for Large Language Mode… ☆ 56 · Updated last month
- Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, finetuning, evaluating, and serving LLMs in JAX/Fl… ☆ 75 · Updated last year
- Replicating O1 inference-time scaling laws ☆ 90 · Updated 11 months ago
- ☆ 20 · Updated last year