HazyResearch / zoology
Understand and test language model architectures on synthetic tasks.
☆175 · Updated this week
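zoology probes architectures with synthetic tasks such as associative recall: the model sees key-value pairs, then a query key, and must emit the matching value. As a rough illustration only (my own sketch, not zoology's actual API; `make_recall_batch` is a hypothetical helper), such a batch can be generated like this:

```python
import torch

def make_recall_batch(batch=32, num_pairs=8, vocab=64, seed=0):
    g = torch.Generator().manual_seed(seed)
    # Distinct keys per row so each query has exactly one correct answer.
    keys = torch.argsort(torch.rand(batch, vocab, generator=g), dim=1)[:, :num_pairs]
    vals = torch.randint(0, vocab, (batch, num_pairs), generator=g)
    # Interleave pairs as (k1, v1, k2, v2, ...), then append one query key.
    seq = torch.stack([keys, vals], dim=-1).reshape(batch, -1)
    idx = torch.randint(0, num_pairs, (batch,), generator=g)
    query = keys[torch.arange(batch), idx]
    target = vals[torch.arange(batch), idx]
    inputs = torch.cat([seq, query[:, None]], dim=1)  # (batch, 2*num_pairs + 1)
    return inputs, target  # model should predict target at the final position
```

A model is then scored on how often its prediction at the last position matches `target`, which makes recall ability directly measurable.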
Alternatives and similar repositories for zoology:
Users interested in zoology are comparing it to the libraries listed below.
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" · ☆219 · Updated last month
- A MAD laboratory to improve AI architecture designs 🧪 · ☆102 · Updated last month
- ☆51 · Updated 7 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs · ☆90 · Updated last month
- nanoGPT-like codebase for LLM training · ☆83 · Updated this week
- Some preliminary explorations of Mamba's context scaling · ☆206 · Updated 11 months ago
- Language models scale reliably with over-training and on downstream tasks · ☆96 · Updated 9 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters · ☆110 · Updated last month
- ☆168 · Updated last year
- ☆135 · Updated last year
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind · ☆115 · Updated 4 months ago
- ☆180 · Updated this week
- Token Omission Via Attention · ☆122 · Updated 3 months ago
- ☆53 · Updated 11 months ago
- ☆74 · Updated last year
- ☆75 · Updated 6 months ago
- Multipack distributed sampler for fast padding-free training of LLMs · ☆184 · Updated 5 months ago
- Supporting PyTorch FSDP for optimizers · ☆75 · Updated last month
- Code for the NeurIPS 2024 Spotlight "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" · ☆66 · Updated 2 months ago
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) · ☆182 · Updated 7 months ago
- ☆135 · Updated this week
- Triton-based implementation of Sparse Mixture of Experts · ☆192 · Updated last month
- Accelerated First Order Parallel Associative Scan (see the scan sketch after this list) · ☆169 · Updated 4 months ago
- ☆164 · Updated last year
- A fast implementation of T5/UL2 in PyTorch using Flash Attention · ☆75 · Updated this week
- A curated reading list of research in Adaptive Computation, Inference-Time Computation & Mixture of Experts (MoE) · ☆136 · Updated 2 weeks ago
- Minimal (400 LOC) implementation, maximal (multi-node, FSDP) GPT training · ☆121 · Updated 9 months ago
- A fusion of a linear layer and a cross-entropy loss, written for PyTorch in Triton · ☆61 · Updated 5 months ago
- 🧱 Modula software package · ☆132 · Updated this week
- Normalized Transformer (nGPT) · ☆145 · Updated last month
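The accelerated-scan entry above targets first-order recurrences of the form h_t = a_t · h_{t−1} + b_t, which show up in linear-attention and state-space models. Because composing the affine maps h → a·h + b is associative, the recurrence can be solved in O(log T) parallel steps instead of a sequential loop. The sketch below is my own pure-PyTorch illustration of that idea (the `combine` and `scan` names are hypothetical, and the real repository ships optimized GPU kernels rather than this naive version):

```python
import torch
import torch.nn.functional as F

def combine(left, right):
    # Applying (a1, b1) then (a2, b2) composes to (a2*a1, a2*b1 + b2),
    # and this composition is associative, which is what a scan needs.
    a1, b1 = left
    a2, b2 = right
    return a2 * a1, a2 * b1 + b2

def scan(a, b):
    # Inclusive Hillis-Steele scan over the last dim in O(log T) steps;
    # returns h_t for h_t = a_t * h_{t-1} + b_t with h_0 = 0.
    T = a.shape[-1]
    step = 1
    while step < T:
        a_prev = F.pad(a[..., :-step], (step, 0), value=1.0)  # pad with identity map
        b_prev = F.pad(b[..., :-step], (step, 0), value=0.0)
        a, b = combine((a_prev, b_prev), (a, b))
        step *= 2
    return b  # with h_0 = 0, the prefix map's offset is exactly h_t
```

To sanity-check it, compare against the sequential reference `h = a[..., t] * h + b[..., t]` accumulated over t; the two agree up to floating-point error for any sequence length, including lengths that are not powers of two.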