google-research / meliad
☆256 · Updated last month
Alternatives and similar repositories for meliad
Users interested in meliad are comparing it to the libraries listed below.
- Neural Networks and the Chomsky Hierarchy ☆206 · Updated last year
- ☆356 · Updated last year
- Sequence modeling with Mega ☆296 · Updated 2 years ago
- ☆166 · Updated 2 years ago
- Emergent world representations: Exploring a sequence model trained on a synthetic task ☆182 · Updated 2 years ago
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers", NeurIPS 2023 ☆136 · Updated last year
- Understand and test language model architectures on synthetic tasks ☆219 · Updated last month
- Experiments around a simple idea for inducing multiple hierarchical predictive models within a GPT ☆214 · Updated 10 months ago
- Recurrent Memory Transformer ☆150 · Updated last year
- [NeurIPS 2023] Learning Transformer Programs ☆161 · Updated last year
- Inference code for LLaMA models in JAX ☆118 · Updated last year
- Official code from the paper "Offline RL for Natural Language Generation with Implicit Language Q Learning" ☆208 · Updated last year
- Language Modeling with the H3 State Space Model ☆519 · Updated last year
- Python library that enables complex compositions of language models such as scratchpads, chain of thought, tool use, selection-inference… ☆208 · Updated last month
- Implementation of Block Recurrent Transformer - Pytorch ☆220 · Updated 10 months ago
- Train very large language models in Jax ☆205 · Updated last year
- JAX implementation of the Llama 2 model ☆219 · Updated last year
- Implementation of Recurrent Memory Transformer, NeurIPS 2022 paper, in Pytorch ☆410 · Updated 6 months ago
- Implementation of Memorizing Transformers (ICLR 2022), attention net augmented with indexing and retrieval of memories using approximate … ☆634 · Updated last year
- Implementation of https://srush.github.io/annotated-s4 ☆498 · Updated 3 weeks ago
- ☆159 · Updated 2 years ago
- Unofficial re-implementation of "Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets" ☆77 · Updated 3 years ago
- Implementation of the conditionally routed attention in the CoLT5 architecture, in Pytorch ☆230 · Updated 10 months ago
- Transformer Grammars: Augmenting Transformer Language Models with Syntactic Inductive Biases at Scale, TACL (2022) ☆127 · Updated 3 weeks ago
- Some common Hugging Face transformers in maximal update parametrization (µP) ☆81 · Updated 3 years ago
- Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways - in Jax (Equinox framework) ☆187 · Updated 3 years ago
- Amos optimizer with JEstimator lib ☆82 · Updated last year
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆237 · Updated last month
- ☆83 · Updated last year
- A crude RLHF layer on top of nanoGPT with the Gumbel-Softmax trick ☆290 · Updated last year