minyoungg / platonic-rep
☆519 · Updated last week
Alternatives and similar repositories for platonic-rep:
Users interested in platonic-rep are comparing it to the libraries listed below:
- [ICLR 2025 Spotlight 🔥] Official Implementation of TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters (☆548, updated 2 months ago)
- Public repository for "The Surprising Effectiveness of Test-Time Training for Abstract Reasoning" (☆304, updated 5 months ago)
- Code for "Diffusion Forcing: Next-token Prediction Meets Full-Sequence Diffusion" (☆812, updated 3 weeks ago)
- Official JAX implementation of Learning to (Learn at Test Time): RNNs with Expressive Hidden States (☆405, updated 8 months ago)
- [ICML 2024 Best Paper] Discrete Diffusion Modeling by Estimating the Ratios of the Data Distribution (https://arxiv.org/abs/2310.16834) (☆556, updated last year)
- Implementation of Ring Attention, from Liu et al. at Berkeley AI, in PyTorch (☆511, updated 5 months ago)
- Block Diffusion: Interpolating Between Autoregressive and Diffusion Language Models (☆536, updated last week)
- (☆451, updated 9 months ago)
- Muon optimizer: >30% sample efficiency with <3% wallclock overhead (☆575, updated 3 weeks ago)
- Annotated version of the Mamba paper (☆483, updated last year)
- Some preliminary explorations of Mamba's context scaling (☆212, updated last year)
- ViT Prisma is a mechanistic interpretability library for Vision Transformers (ViTs) (☆218, updated this week)
- A curated list of awesome discrete diffusion model resources (☆299, updated 2 months ago)
- This repo contains the code for the paper "Intuitive physics understanding emerges from self-supervised pretraining on natural videos" (☆133, updated 2 months ago)
- GPT-4-based personalized arXiv paper assistant bot (☆516, updated last year)
- Helpful tools and examples for working with flex-attention (☆726, updated last week)
- Quick implementation of nGPT, learning entirely on the hypersphere, from Nvidia AI (☆279, updated last month)
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in PyTorch (☆328, updated 10 months ago)
- (☆610, updated last year)
- Simple and Effective Masked Diffusion Language Model (☆368, updated last week)
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs (☆407, updated last year)
- PyTorch implementation of Transfusion, "Predict the Next Token and Diffuse Images with One Multi-Modal Model", from Meta AI (☆1,061, updated last month)
- Normalized Transformer (nGPT) (☆168, updated 5 months ago)
- Pretraining code for a large-scale depth-recurrent language model (☆743, updated last week)
- PyTorch implementation of Infini-Transformer from "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention…" (☆288, updated 11 months ago)
- Reading list for research topics in state-space models (☆280, updated last week)
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… (☆317, updated 4 months ago)
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling (☆862, updated 2 months ago)
- Official implementation of the paper "SFT Memorizes, RL Generalizes: A Comparative Study of Foundation Model Post-training" (☆263, updated last month)
- Official PyTorch implementation of Learning to (Learn at Test Time): RNNs with Expressive Hidden States (☆1,177, updated 9 months ago)