google-research / meliad
☆256 · Updated last month
Alternatives and similar repositories for meliad
Users interested in meliad are comparing it to the libraries listed below.
- Neural Networks and the Chomsky Hierarchy ☆207 · Updated last year
- ☆361 · Updated last year
- Experiments around a simple idea for inducing multiple hierarchical predictive models within a GPT ☆215 · Updated 11 months ago
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers", NeurIPS 2023 ☆136 · Updated last year
- Sequence modeling with Mega. ☆296 · Updated 2 years ago
- Recurrent Memory Transformer ☆150 · Updated last year
- Emergent world representations: Exploring a sequence model trained on a synthetic task ☆184 · Updated 2 years ago
- Train very large language models in Jax. ☆205 · Updated last year
- Understand and test language model architectures on synthetic tasks. ☆221 · Updated 2 weeks ago
- [NeurIPS 2023] Learning Transformer Programs ☆162 · Updated last year
- An interpreter for RASP as described in the ICML 2021 paper "Thinking Like Transformers" ☆318 · Updated 10 months ago
- Transformer Grammars: Augmenting Transformer Language Models with Syntactic Inductive Biases at Scale, TACL (2022) ☆127 · Updated last month
- Official code from the paper "Offline RL for Natural Language Generation with Implicit Language Q Learning" ☆208 · Updated 2 years ago
- Implementation of Recurrent Memory Transformer, NeurIPS 2022 paper, in Pytorch ☆412 · Updated 6 months ago
- Inference code for LLaMA models in JAX ☆118 · Updated last year
- ☆166 · Updated 2 years ago
- Implementation of Memorizing Transformers (ICLR 2022), attention net augmented with indexing and retrieval of memories using approximate … ☆634 · Updated 2 years ago
- Language Modeling with the H3 State Space Model ☆519 · Updated last year
- JAX implementation of the Llama 2 model ☆219 · Updated last year
- Python library which enables complex compositions of language models such as scratchpads, chain of thought, tool use, selection-inference… ☆207 · Updated last month
- Implementation of https://srush.github.io/annotated-s4 ☆500 · Updated last month
- Implementation of the conditionally routed attention in the CoLT5 architecture, in Pytorch ☆228 · Updated 10 months ago
- Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways - in Jax (Equinox framework) ☆187 · Updated 3 years ago
- Scaling Data-Constrained Language Models ☆338 · Updated last month
- ☆540 · Updated last year
- ☆159 · Updated 2 years ago
- Efficient Transformers with Dynamic Token Pooling ☆62 · Updated 2 years ago
- some common Huggingface transformers in maximal update parametrization (µP) ☆82 · Updated 3 years ago
- Task-based datasets, preprocessing, and evaluation for sequence models. ☆583 · Updated this week
- Randomized Positional Encodings Boost Length Generalization of Transformers ☆82 · Updated last year