apoorvkh / academic-pretraining
$100K or 100 Days: Trade-offs when Pre-Training with Academic Resources
☆93 · Updated last week
Related projects
Alternatives and complementary repositories for academic-pretraining
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆84 · Updated last week
- Code for training and evaluating Contextual Document Embedding models. ☆93 · Updated this week
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024). ☆177 · Updated 5 months ago
- Minimal (400 LOC) implementation of maximum (multi-node, FSDP) GPT training. ☆112 · Updated 6 months ago
- σ-GPT: A New Approach to Autoregressive Models. ☆59 · Updated 2 months ago
- An introduction to LLM Sampling. ☆62 · Updated this week
- Scalable neural net training via automatic normalization in the modular norm. ☆119 · Updated 2 months ago
- Official implementation of Phi-Mamba, a MOHAWK-distilled model (Transformers to SSMs: Distilling Quadratic Knowledge to Subquadratic Mode…). ☆77 · Updated last month
- WIP. ☆89 · Updated 2 months ago
- A MAD laboratory to improve AI architecture designs 🧪. ☆95 · Updated 6 months ago
- Understand and test language model architectures on synthetic tasks. ☆161 · Updated 6 months ago
- Scaling is a distributed training library and installable dependency designed to scale up neural networks, with a dedicated module for tr…. ☆46 · Updated last week
- Official implementation of MAIA, a Multimodal Automated Interpretability Agent. ☆62 · Updated 2 months ago
- Code for reproducing the paper "Not All Language Model Features Are Linear". ☆60 · Updated last month
- The official repository for HyperZ⋅Z⋅W Operator Connects Slow-Fast Networks for Full Context Interaction. ☆31 · Updated last month
- Muon optimizer for neural networks: >30% extra sample efficiency, <3% wallclock overhead. ☆69 · Updated this week
- Collection of autoregressive model implementations. ☆66 · Updated last week
- Simple Transformer in Jax. ☆115 · Updated 4 months ago
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens. ☆105 · Updated 2 weeks ago
- A curated reading list of research in Adaptive Computation, Inference-Time Computation & Mixture of Experts (MoE). ☆128 · Updated last week
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment. ☆46 · Updated 2 months ago