google / seqio
Task-based datasets, preprocessing, and evaluation for sequence models.
☆585 · Updated last week
Alternatives and similar repositories for seqio
Users interested in seqio are comparing it to the libraries listed below.
- ☆361 · Updated last year
- Mistral: A strong, northwesterly wind: Framework for transparent and accessible large-scale language model training, built with Hugging F… ☆573 · Updated last year
- Organize your experiments into discrete steps that can be cached and reused throughout the lifetime of your research project. ☆563 · Updated last year
- Implementation of RETRO, Deepmind's Retrieval based Attention net, in Pytorch ☆871 · Updated last year
- Code for the ALiBi method for transformer language models (ICLR 2022) ☆539 · Updated last year
- ☆246 · Updated 5 years ago
- Reproduce results and replicate training for T0 (Multitask Prompted Training Enables Zero-Shot Task Generalization) ☆462 · Updated 2 years ago
- Central place for the engineering/scaling WG: documentation, SLURM scripts and logs, compute environment and data. ☆1,006 · Updated last year
- Repository containing code for "How to Train BERT with an Academic Budget" paper ☆314 · Updated last year
- Pax is a Jax-based machine learning framework for training large scale models. Pax allows for advanced and fully configurable experimenta… ☆526 · Updated this week
- Parallelformers: An Efficient Model Parallelization Toolkit for Deployment ☆791 · Updated 2 years ago
- An open collection of implementation tips, tricks and resources for training large language models ☆478 · Updated 2 years ago
- Long Range Arena for Benchmarking Efficient Transformers ☆762 · Updated last year
- Cramming the training of a (BERT-type) language model into limited compute. ☆1,345 · Updated last year
- Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways ☆824 · Updated 2 years ago
- maximal update parametrization (µP) ☆1,584 · Updated last year
- Language Modeling with the H3 State Space Model ☆519 · Updated last year
- Code for T-Few from "Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning" ☆456 · Updated last year
- Flexible components pairing 🤗 Transformers with Pytorch Lightning ☆610 · Updated 2 years ago
- Pipeline for pulling and processing online language model pretraining data from the web ☆177 · Updated 2 years ago
- Sequence modeling with Mega. ☆297 · Updated 2 years ago
- A prize for finding tasks that cause large language models to show inverse scaling ☆614 · Updated last year
- A platform for managing machine learning experiments ☆866 · Updated last month
- Seminar on Large Language Models (COMP790-101 at UNC Chapel Hill, Fall 2022) ☆311 · Updated 2 years ago
- Scaling Data-Constrained Language Models ☆339 · Updated last month
- Used for adaptive human-in-the-loop evaluation of language and embedding models. ☆312 · Updated 2 years ago
- ☆67 · Updated 3 years ago
- ☆255 · Updated 2 months ago
- ☆188 · Updated 3 weeks ago
- A minimal PyTorch Lightning OpenAI GPT w DeepSpeed Training! ☆113 · Updated 2 years ago