aidangomez / welcome
Generate a cute welcome message for yourself each day
☆22 · Updated 2 years ago
Alternatives and similar repositories for welcome
Users who are interested in welcome are comparing it to the repositories listed below
- Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways - in Jax (Equinox framework) ☆188 · Updated 3 years ago
- ☆53 · Updated last year
- ☆61 · Updated 3 years ago
- Fast bare-bones BPE for modern tokenizer training ☆164 · Updated 2 months ago
- JAX implementation of the Llama 2 model ☆219 · Updated last year
- LoRA for arbitrary JAX models and functions ☆142 · Updated last year
- Large-scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* ☆87 · Updated last year
- Train vision models using JAX and 🤗 transformers ☆99 · Updated last week
- ☆307 · Updated last year
- git extension for {collaborative, communal, continual} model development ☆216 · Updated 9 months ago
- Train very large language models in Jax. ☆208 · Updated last year
- An interactive exploration of Transformer programming. ☆269 · Updated last year
- seqax = sequence modeling + JAX ☆166 · Updated last month
- ☆144 · Updated 2 years ago
- Resources from the EleutherAI Math Reading Group ☆54 · Updated 6 months ago
- A Jax-based library for building transformers; includes implementations of GPT, Gemma, Llama, Mixtral, Whisper, Swin, ViT, and more. ☆291 · Updated last year
- Automatic gradient descent ☆210 · Updated 2 years ago
- HomebrewNLP in JAX flavour for maintainable TPU training ☆50 · Updated last year
- A case study of efficient training of large language models using commodity hardware. ☆68 · Updated 3 years ago
- Minimalistic, extremely fast, and hackable researcher's toolbench for GPT models in 307 lines of code. Reaches <3.8 validation loss on wi… ☆349 · Updated last year
- JAX notebook showing how to apply LoRA + GPTQ to arbitrary models ☆10 · Updated 2 years ago
- A comprehensive deep dive into the world of tokens ☆226 · Updated last year
- JAX Synergistic Memory Inspector ☆179 · Updated last year
- ☆67 · Updated 3 years ago
- Experiments around a simple idea for inducing multiple hierarchical predictive models within a GPT ☆220 · Updated last year
- ☆277 · Updated last year
- Simple Transformer in Jax ☆140 · Updated last year
- A port of the Mistral-7B model in JAX ☆32 · Updated last year
- ☆28 · Updated 2 years ago
- JAX implementation of the Mistral 7B v0.2 model ☆35 · Updated last year