PAIR-code / tiny-transformers
☆22 · Updated 3 weeks ago
Alternatives and similar repositories for tiny-transformers
Users interested in tiny-transformers are comparing it to the libraries listed below.
- Computing the information content of trained neural networks (☆22, updated 4 years ago)
- Implementation of Token Shift GPT, an autoregressive model that relies solely on shifting the sequence space for mixing (☆49, updated 4 years ago)
- DiCE: The Infinitely Differentiable Monte-Carlo Estimator (☆32, updated 2 years ago)
- AdaCat (☆49, updated 3 years ago)
- Multi-framework implementation of Deep Kernel Shaping and Tailored Activation Transformations, methods that modify neural networks… (☆75, updated 7 months ago)
- Experimental scripts for researching data-adaptive learning-rate scheduling (☆22, updated 2 years ago)
- An attempt to merge ESBN with Transformers, endowing Transformers with the ability to emergently bind symbols (☆16, updated 4 years ago)
- Code accompanying the paper "LaProp: a Better Way to Combine Momentum with Adaptive Gradient" (☆29, updated 5 years ago)
- Memory-efficient transformer; work in progress (☆19, updated 3 years ago)
- Implementation of some personal helper functions for Einops, my favorite tensor-manipulation library ❤️ (☆57, updated 3 years ago)
- Latent Diffusion Language Models (☆70, updated 2 years ago)
- microjax: a JAX-like function transformation engine, but micro (☆34, updated last year)
- Implementation of a holodeck, written in PyTorch (☆18, updated 2 years ago)
- A case study of efficient training of large language models using commodity hardware (☆68, updated 3 years ago)
- Adversarial examples for the new ConvNeXt architecture (☆20, updated 4 years ago)
- Various handy scripts to quickly set up new Linux and Windows sandboxes, containers, and WSL (☆40, updated last week)
- (untitled repository) (☆18, updated last year)
- (untitled repository) (☆29, updated last year)
- PyTorch reimplementation of the paper "HyperMixer: An MLP-based Green AI Alternative to Transformers" [arXiv 2022] (☆18, updated 3 years ago)
- FID computation in JAX/Flax (☆29, updated last year)
- Deep Networks Grok All the Time and Here is Why (☆38, updated last year)
- Re-implementation of "Grokking: Generalization beyond overfitting on small algorithmic datasets" (☆39, updated 4 years ago)
- A generative modelling toolkit for PyTorch (☆70, updated 4 years ago)
- Utilities for Training Very Large Models (☆58, updated last year)
- Official code for the paper "Metadata Archaeology" (☆19, updated 2 years ago)
- HomebrewNLP in JAX flavour for maintainable TPU training (☆51, updated 2 years ago)
- (untitled repository) (☆34, updated last year)
- Implementation of Metaformer, but in an autoregressive manner (☆26, updated 3 years ago)
- Serialize JAX, Flax, Haiku, or Objax model params with 🤗 `safetensors` (☆47, updated last year)
- RWKV model implementation (☆38, updated 2 years ago)