HazyResearch / zoology
Understand and test language model architectures on synthetic tasks.
⭐ 163 · Updated 6 months ago
Related projects
Alternatives and complementary repositories for zoology
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" · ⭐ 214 · Updated this week
- A MAD laboratory to improve AI architecture designs 🧪 · ⭐ 95 · Updated 6 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters · ⭐ 104 · Updated 2 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs · ⭐ 84 · Updated this week
- Token Omission Via Attention · ⭐ 121 · Updated last month
- Some preliminary explorations of Mamba's context scaling · ⭐ 191 · Updated 9 months ago
- Normalized Transformer (nGPT) · ⭐ 87 · Updated this week
- Minimal (400 LOC) implementation, maximum (multi-node, FSDP) GPT training · ⭐ 113 · Updated 7 months ago
- Language models scale reliably with over-training and on downstream tasks · ⭐ 94 · Updated 7 months ago
- Multipack distributed sampler for fast padding-free training of LLMs · ⭐ 178 · Updated 3 months ago
- Griffin MQA + Hawk Linear RNN Hybrid · ⭐ 85 · Updated 6 months ago
- A fast implementation of T5/UL2 in PyTorch using Flash Attention · ⭐ 71 · Updated last month
- Accelerated First Order Parallel Associative Scan · ⭐ 164 · Updated 3 months ago
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) · ⭐ 180 · Updated 5 months ago
- Some common Huggingface transformers in maximal update parametrization (µP) · ⭐ 77 · Updated 2 years ago
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models"โ181Updated last month
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind · ⭐ 112 · Updated 3 months ago
- Simple and efficient PyTorch-native transformer training and inference (batched) · ⭐ 61 · Updated 7 months ago
- NanoGPT-like codebase for LLM training · ⭐ 75 · Updated this week
- Randomized Positional Encodings Boost Length Generalization of Transformers · ⭐ 78 · Updated 8 months ago
- seqax = sequence modeling + JAX · ⭐ 134 · Updated 4 months ago