facebookresearch / MemoryMosaics
Memory Mosaics are networks of associative memories working in concert to achieve a prediction task.
☆45 · Updated 5 months ago
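The repository itself is the reference implementation; purely as intuition for the one-line description above, here is a minimal, self-contained sketch of a single kernel-smoothing associative memory, the kind of building block a Memory Mosaic composes. The class name, the Gaussian-kernel choice, and the toy dimensions are illustrative assumptions, not the repo's API.

```python
import numpy as np

class AssociativeMemory:
    """Illustrative sketch (not the repo's API): a key-value associative
    memory that answers a query with a softmax-weighted average of stored
    values, weighted by Gaussian-kernel similarity of query to stored keys."""

    def __init__(self, beta: float = 1.0):
        self.beta = beta  # kernel bandwidth (inverse temperature); assumed hyperparameter
        self.keys: list[np.ndarray] = []
        self.values: list[np.ndarray] = []

    def store(self, key: np.ndarray, value: np.ndarray) -> None:
        self.keys.append(key)
        self.values.append(value)

    def retrieve(self, query: np.ndarray) -> np.ndarray:
        K = np.stack(self.keys)    # (n, d_k)
        V = np.stack(self.values)  # (n, d_v)
        # Gaussian-kernel scores exp(-beta * ||q - k_i||^2), softmax-normalized.
        scores = -self.beta * np.sum((K - query) ** 2, axis=1)
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        return weights @ V         # kernel-smoothed value estimate

# Toy usage: store random (key, value) pairs, then query.
rng = np.random.default_rng(0)
mem = AssociativeMemory(beta=2.0)
for _ in range(8):
    mem.store(rng.normal(size=4), rng.normal(size=4))
print(mem.retrieve(rng.normal(size=4)))
```

A Memory Mosaic would train many such memories in concert, with learned feature maps producing the keys and values, so that their combined retrievals implement the prediction task.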
Alternatives and similar repositories for MemoryMosaics
Users interested in MemoryMosaics are comparing it to the libraries listed below.
- ☆53 · Updated last year
- ☆82 · Updated 10 months ago
- A MAD laboratory to improve AI architecture designs 🧪 ☆123 · Updated 7 months ago
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆75 · Updated 8 months ago
- ☆97 · Updated 9 months ago
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆77 · Updated 7 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆147 · Updated 2 weeks ago
- ☆32 · Updated last year
- ☆27 · Updated 5 months ago
- Sparse and discrete interpretability tool for neural networks ☆63 · Updated last year
- ☆53 · Updated last year
- ☆11 · Updated 4 months ago
- EvaByte: Efficient Byte-level Language Models at Scale ☆103 · Updated 2 months ago
- ☆33 · Updated 6 months ago
- ☆53 · Updated last year
- [NeurIPS 2024] Low rank memory efficient optimizer without SVD ☆30 · Updated 2 weeks ago
- One Initialization to Rule them All: Fine-tuning via Explained Variance Adaptation ☆40 · Updated 9 months ago
- Learning from preferences is a common paradigm for fine-tuning language models. Yet, many algorithmic design decisions come into play. Ou… ☆29 · Updated last year
- Mixture of A Million Experts ☆46 · Updated 11 months ago
- ☆32 · Updated last year
- ☆45 · Updated last year
- Experiments on the impact of depth in transformers and SSMs. ☆32 · Updated 8 months ago
- Universal Neurons in GPT2 Language Models ☆30 · Updated last year
- Experiments for efforts to train a new and improved T5 ☆76 · Updated last year
- Language models scale reliably with over-training and on downstream tasks ☆97 · Updated last year
- Official implementation of Phi-Mamba. A MOHAWK-distilled model (Transformers to SSMs: Distilling Quadratic Knowledge to Subquadratic Mode… ☆110 · Updated 10 months ago
- Token Omission Via Attention ☆128 · Updated 9 months ago
- ☆87 · Updated last year
- Triton Implementation of HyperAttention Algorithm ☆48 · Updated last year
- Understand and test language model architectures on synthetic tasks. ☆219 · Updated last month