facebookresearch / MemoryMosaics
Memory Mosaics are networks of associative memories working in concert to achieve a prediction task.
☆34 · Updated last month
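As a rough, illustrative sketch of the associative-memory idea named in the description above (not code from this repository), the snippet below stores key-value pairs in a simple Gaussian-kernel memory and combines the recall of two such memories into a prediction. The class name, kernel choice, and `beta` parameter are assumptions made for illustration only.

```python
import torch

# Minimal sketch (assumed, not the repository's implementation): one associative
# memory unit that stores (key, value) pairs and retrieves values by
# Gaussian-kernel similarity to a query, in the style of kernel smoothing.
class AssociativeMemory:
    def __init__(self, beta: float = 1.0):
        self.beta = beta      # sharpness of the similarity kernel (assumed parameter)
        self.keys = []        # stored keys
        self.values = []      # stored values

    def store(self, key: torch.Tensor, value: torch.Tensor) -> None:
        self.keys.append(key)
        self.values.append(value)

    def retrieve(self, query: torch.Tensor) -> torch.Tensor:
        keys = torch.stack(self.keys)      # (n, d)
        values = torch.stack(self.values)  # (n, d)
        # Similarity falls off with squared distance between query and stored keys.
        scores = -self.beta * ((keys - query) ** 2).sum(dim=-1)
        weights = torch.softmax(scores, dim=0)
        return weights @ values            # weighted recall of stored values

# Toy usage: two memories "working in concert" by averaging their recalls.
mem_a, mem_b = AssociativeMemory(), AssociativeMemory()
for m in (mem_a, mem_b):
    m.store(torch.tensor([1.0, 0.0]), torch.tensor([0.0, 1.0]))
    m.store(torch.tensor([0.0, 1.0]), torch.tensor([1.0, 0.0]))
query = torch.tensor([0.9, 0.1])
prediction = (mem_a.retrieve(query) + mem_b.retrieve(query)) / 2
print(prediction)
```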
Related projects
Alternatives and complementary repositories for MemoryMosaics
- The repository contains code for Adaptive Data Optimization ☆18 · Updated last month
- ☆62 · Updated 3 months ago
- ☆53 · Updated 10 months ago
- ☆53 · Updated 3 weeks ago
- Language models scale reliably with over-training and on downstream tasks ☆94 · Updated 7 months ago
- ☆28 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆84 · Updated this week
- ☆50 · Updated 6 months ago
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs ☆74 · Updated this week
- A MAD laboratory to improve AI architecture designs 🧪 ☆95 · Updated 6 months ago
- Universal Neurons in GPT2 Language Models ☆27 · Updated 5 months ago
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆48 · Updated 7 months ago
- ☆41 · Updated 8 months ago
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆61 · Updated last week
- ☆44 · Updated last year
- Efficient scaling laws and collaborative pretraining ☆13 · Updated this week
- ☆63 · Updated 4 months ago
- ☆55 · Updated last month
- Experiments for efforts to train a new and improved t5 ☆76 · Updated 7 months ago
- This repository includes code to reproduce the tables in "Loss Landscapes are All You Need: Neural Network Generalization Can Be Explaine…" ☆34 · Updated last year
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆36 · Updated last year
- Using FlexAttention to compute attention with different masking patterns ☆40 · Updated 2 months ago
- ☆22 · Updated 2 weeks ago
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆79 · Updated last year
- Sparse and discrete interpretability tool for neural networks ☆55 · Updated 9 months ago
- Triton Implementation of HyperAttention Algorithm ☆46 · Updated 11 months ago
- ☆46 · Updated last month
- Code and files for the paper "Are Emergent Abilities in Large Language Models just In-Context Learning?" ☆34 · Updated 8 months ago
- Code for PHATGOOSE, introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆78 · Updated 8 months ago
- Minimal (400 LOC) implementation of maximum (multi-node, FSDP) GPT training ☆113 · Updated 7 months ago