yk / litter
☆71 · Updated last year
Alternatives and similar repositories for litter
Users interested in litter are comparing it to the libraries listed below.
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆101 · Updated 8 months ago
- ☆53 · Updated last year
- Implementation of the Llama architecture with RLHF + Q-learning ☆166 · Updated 6 months ago
- JAX Implementation of Black Forest Labs' Flux.1 family of models ☆35 · Updated 2 weeks ago
- Serialize JAX, Flax, Haiku, or Objax model params with 🤗`safetensors` (see the sketch after this list) ☆45 · Updated last year
- The GeoV model is a large language model designed by Georges Harik and uses Rotary Positional Embeddings with Relative distances (RoPER)… ☆121 · Updated 2 years ago
- ☆30 · Updated last year
- σ-GPT: A New Approach to Autoregressive Models ☆67 · Updated last year
- Thispersondoesnotexist went down, so this time, while building it back up, I am going to open source all of it. ☆90 · Updated 2 years ago
- ☆82 · Updated last year
- ☆61 · Updated last year
- ☆101 · Updated last month
- Smol but mighty language model ☆63 · Updated 2 years ago
- Large scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* ☆87 · Updated last year
- Various handy scripts to quickly setup new Linux and Windows sandboxes, containers and WSL. ☆40 · Updated this week
- Exploring finetuning public checkpoints on filtered 8K sequences on the Pile ☆116 · Updated 2 years ago
- Experiments with generating opensource language model assistants ☆97 · Updated 2 years ago
- Train vision models using JAX and 🤗 transformers ☆99 · Updated this week
- Drop in replacement for OpenAI, but with Open models. ☆152 · Updated 2 years ago
- ☆33 · Updated 2 years ago
- Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways - in Jax (Equinox framework) ☆187 · Updated 3 years ago
- ☆69 · Updated last year
- Collection of autoregressive model implementations ☆86 · Updated 4 months ago
- This repository contains the code used for my "Optimizing Memory Usage for Training LLMs and Vision Transformers in PyTorch" blog post ☆92 · Updated 2 years ago
- Latent Diffusion Language Models ☆69 · Updated last year
- ☆91 · Updated 2 years ago
- Full finetuning of large language models without large memory requirements ☆94 · Updated last year
- Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA ☆104 · Updated 3 months ago
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆146 · Updated 6 months ago
- The Next Generation Multi-Modality Superintelligence ☆70 · Updated 11 months ago
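
For the safetensors serialization item above, here is a minimal sketch of the underlying idea using the 🤗 `safetensors` Flax bindings directly; the parameter names and file path are illustrative, not that repository's own API:

```python
# Minimal sketch: persisting a flat dict of JAX arrays with 🤗 safetensors.
# Key names and file path are illustrative only.
import jax.numpy as jnp
from safetensors.flax import save_file, load_file

params = {
    "dense/kernel": jnp.ones((4, 4), dtype=jnp.float32),
    "dense/bias": jnp.zeros((4,), dtype=jnp.float32),
}

# Write the arrays to a .safetensors file (a flat, string-keyed dict is required).
save_file(params, "params.safetensors")

# Read them back as a Dict[str, jnp.ndarray].
restored = load_file("params.safetensors")
assert all(bool(jnp.array_equal(params[k], restored[k])) for k in params)
```

Nested Flax/Haiku parameter trees need to be flattened to string keys first (e.g. with `flax.traverse_util.flatten_dict`), which is the kind of convenience a wrapper like the one listed above presumably provides.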