aidangomez / welcome
Generate a cute welcome message for yourself each day
☆22 · Updated 2 years ago
Alternatives and similar repositories for welcome
Users interested in welcome are comparing it to the libraries listed below:
- ☆63 · Updated 3 years ago
- Resources from the EleutherAI Math Reading Group ☆54 · Updated 11 months ago
- ☆53 · Updated 2 years ago
- Various handy scripts to quickly set up new Linux and Windows sandboxes, containers, and WSL. ☆40 · Updated last week
- HomebrewNLP in JAX flavour for maintainable TPU training ☆51 · Updated 2 years ago
- Implementation of the specific Transformer architecture from "PaLM - Scaling Language Modeling with Pathways" in JAX (Equinox framework) ☆190 · Updated 3 years ago
- A case study of efficient training of large language models using commodity hardware. ☆68 · Updated 3 years ago
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… ☆53 · Updated 2 years ago
- Large-scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* ☆86 · Updated 2 years ago
- git extension for {collaborative, communal, continual} model development ☆217 · Updated last year
- Latent Diffusion Language Models ☆70 · Updated 2 years ago
- Automatically take good care of your preemptible TPUs ☆37 · Updated 2 years ago
- Minimal (400 LOC) implementation of Maximum (multi-node, FSDP) GPT training ☆132 · Updated last year
- Experiments for efforts to train a new and improved T5 ☆76 · Updated last year
- An implementation of the Llama architecture, to instruct and delight ☆21 · Updated 8 months ago
- ☆13 · Updated 4 years ago
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆103 · Updated last year
- ☆20 · Updated 2 years ago
- ☆28 · Updated 3 years ago
- Train vision models using JAX and 🤗 transformers ☆100 · Updated last month
- Various transformers for FSDP research ☆38 · Updated 3 years ago
- Scaling is a distributed training library and installable dependency designed to scale up neural networks, with a dedicated module for tr… ☆66 · Updated 2 months ago
- JAX implementation of the Llama 2 model ☆216 · Updated 2 years ago
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile ☆116 · Updated 2 years ago
- LayerNorm(SmallInit(Embedding)) in a Transformer to improve convergence ☆61 · Updated 3 years ago
- Implementation of the Llama architecture with RLHF + Q-learning ☆170 · Updated last year
- An interactive exploration of Transformer programming. ☆271 · Updated 2 years ago
- A set of Python scripts that makes your experience on TPU better ☆56 · Updated 4 months ago
- Experiment of using Tangent to autodiff Triton ☆82 · Updated 2 years ago
- ☆68 · Updated last year
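
The LayerNorm(SmallInit(Embedding)) entry above refers to a simple convergence trick: initialize the embedding table with very small values and normalize its output with LayerNorm before it enters the Transformer stack. A minimal NumPy sketch of the idea, where the function names and the `1e-4` init scale are illustrative assumptions rather than the repository's actual code:

```python
import numpy as np

def small_init_embedding(vocab_size, d_model, init_scale=1e-4, seed=0):
    # Embedding table drawn uniformly from a very small range,
    # instead of the usual unit-scale initialization.
    rng = np.random.default_rng(seed)
    return rng.uniform(-init_scale, init_scale, size=(vocab_size, d_model))

def layer_norm(x, eps=1e-5):
    # Standard LayerNorm over the feature dimension (no learned affine here).
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def embed(table, token_ids):
    # LayerNorm(SmallInit(Embedding)): look up rows, then normalize.
    return layer_norm(table[token_ids])
```

In a real model the LayerNorm would carry learned scale/shift parameters; the point of the trick is that the tiny initialization keeps early embedding outputs near zero while the normalization restores a well-conditioned signal for the first layer.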