AlxSp / t-jepa
☆11 · Updated last year
Alternatives and similar repositories for t-jepa
Users interested in t-jepa are comparing it to the libraries listed below.
- Training Models Daily ☆16 · Updated last year
- Cerule - A Tiny Mighty Vision Model ☆67 · Updated last year
- GoldFinch and other hybrid transformer components ☆12 · Updated 3 weeks ago
- [WIP] Transformer to embed Danbooru labelsets ☆13 · Updated last year
- Lightweight package that tracks and summarizes code changes using LLMs (Large Language Models) ☆34 · Updated 8 months ago
- https://x.com/BlinkDL_AI/status/1884768989743882276 ☆28 · Updated 6 months ago
- ☆40 · Updated last year
- ☆28 · Updated last year
- ☆50 · Updated last year
- Explorations into the proposal from the paper "Grokfast, Accelerated Grokking by Amplifying Slow Gradients" ☆103 · Updated 10 months ago
- JAX Scalify: end-to-end scaled arithmetics ☆16 · Updated last year
- ☆34 · Updated last year
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆107 · Updated 8 months ago
- Collection of autoregressive model implementations ☆86 · Updated 6 months ago
- Synthetic data derived by templating, few-shot prompting, transformations on public-domain corpora, and Monte Carlo tree search ☆32 · Updated last month
- GoldFinch and other hybrid transformer components ☆45 · Updated last year
- Simplex Random Feature attention, in PyTorch ☆73 · Updated 2 years ago
- Latent Large Language Models ☆19 · Updated last year
- ☆15 · Updated 2 years ago
- ☆63 · Updated last year
- Lego for GRPO ☆30 · Updated 5 months ago
- Official repo for Learning to Reason for Long-Form Story Generation ☆72 · Updated 6 months ago
- DeMo: Decoupled Momentum Optimization ☆196 · Updated 11 months ago
- Focused on fast experimentation and simplicity ☆75 · Updated 10 months ago
- ☆36 · Updated 3 months ago
- https://hf.co/hexgrad/Kokoro-82M ☆14 · Updated 8 months ago
- An alternative way of calculating self-attention ☆18 · Updated last year
- An introduction to LLM Sampling ☆79 · Updated 10 months ago
- ☆136 · Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆31 · Updated last year