google / seqio
Task-based datasets, preprocessing, and evaluation for sequence models.
☆561 · Updated this week
Related projects
Alternatives and complementary repositories for seqio
- ☆334 · Updated 7 months ago
- Code for the ALiBi method for transformer language models (ICLR 2022) ☆507 · Updated last year
- Mistral: A strong, northwesterly wind: Framework for transparent and accessible large-scale language model training, built with Hugging Face ☆563 · Updated last year
- Pax is a JAX-based machine learning framework for training large-scale models. Pax allows for advanced and fully configurable experimentation ☆457 · Updated last week
- Organize your experiments into discrete steps that can be cached and reused throughout the lifetime of your research project. ☆534 · Updated 5 months ago
- Implementation of RETRO, DeepMind's retrieval-based attention net, in PyTorch ☆851 · Updated last year
- Reproduce results and replicate training of T0 (Multitask Prompted Training Enables Zero-Shot Task Generalization) ☆457 · Updated 2 years ago
- Repository containing code for the "How to Train BERT with an Academic Budget" paper ☆309 · Updated last year
- ☆229 · Updated 4 years ago
- Long Range Arena for benchmarking efficient Transformers ☆729 · Updated 11 months ago
- Central place for the engineering/scaling WG: documentation, SLURM scripts and logs, compute environment and data. ☆980 · Updated 3 months ago
- Code for T-Few from "Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning" ☆431 · Updated last year
- Sequence modeling with Mega. ☆298 · Updated last year
- ☆178 · Updated last week
- An efficient implementation of the popular sequence models for text generation, summarization, and translation tasks. https://arxiv.org/p… ☆433 · Updated 2 years ago
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and JAX ☆516 · Updated this week
- ☆315 · Updated 3 years ago
- Rax is a Learning-to-Rank library written in JAX. ☆319 · Updated 3 weeks ago
- Scaling Data-Constrained Language Models ☆321 · Updated last month
- Parallelformers: An Efficient Model Parallelization Toolkit for Deployment ☆778 · Updated last year
- An open collection of implementation tips, tricks, and resources for training large language models ☆460 · Updated last year
- Efficient, checkpointed data loading for deep learning with massive data sets. ☆205 · Updated last year
- Implementation of the conditionally routed attention in the CoLT5 architecture, in PyTorch ☆225 · Updated 2 months ago
- Language Modeling with the H3 State Space Model ☆513 · Updated last year
- A prize for finding tasks that cause large language models to show inverse scaling ☆597 · Updated last year
- Flexible components pairing 🤗 Transformers with PyTorch Lightning ☆611 · Updated 2 years ago
- Run Effective Large Batch Contrastive Learning Beyond GPU/TPU Memory Constraint ☆361 · Updated 7 months ago
- Cramming the training of a (BERT-type) language model into limited compute. ☆1,296 · Updated 5 months ago
- Train very large language models in JAX. ☆195 · Updated last year
- Library for 8-bit optimizers and quantization routines. ☆714 · Updated 2 years ago