sayakpaul / count-tokens-hf-datasets
This project shows how to compute the total number of training tokens in a large text dataset from 🤗 Datasets using Apache Beam and Dataflow.
⭐26 · Updated 2 years ago
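For a sense of what such a pipeline looks like, here is a minimal sketch (not the repository's actual code): it assumes the dataset has already been exported to newline-delimited text files on GCS, and the `gs://my-bucket/dataset-*.txt` path and `gpt2` tokenizer below are illustrative placeholders.

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


class CountTokens(beam.DoFn):
    """Tokenizes each text record and emits its token count."""

    def setup(self):
        # Load the tokenizer once per worker, not once per element.
        from transformers import AutoTokenizer
        self.tokenizer = AutoTokenizer.from_pretrained("gpt2")  # illustrative choice

    def process(self, line):
        yield len(self.tokenizer(line)["input_ids"])


if __name__ == "__main__":
    with beam.Pipeline(options=PipelineOptions()) as pipeline:
        (
            pipeline
            | "ReadText" >> beam.io.ReadFromText("gs://my-bucket/dataset-*.txt")  # hypothetical path
            | "CountTokens" >> beam.ParDo(CountTokens())
            | "SumCounts" >> beam.CombineGlobally(sum)
            | "PrintTotal" >> beam.Map(print)
        )
```

Running the same script on Dataflow instead of locally only changes the pipeline options (e.g. `--runner=DataflowRunner` plus project, region, and staging flags); the pipeline code itself stays the same.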
Alternatives and similar repositories for count-tokens-hf-datasets
Users who are interested in count-tokens-hf-datasets are comparing it to the libraries listed below.
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. ⭐93 · Updated 2 years ago
- PyTorch implementation of GLOM ⭐22 · Updated 3 years ago
- Embedding Recycling for Language Models ⭐38 · Updated last year
- My explorations into editing the knowledge and memories of an attention network ⭐35 · Updated 2 years ago
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP ⭐58 · Updated 2 years ago
- Implementation of TableFormer, Robust Transformer Modeling for Table-Text Encoding, in PyTorch ⭐39 · Updated 3 years ago
- Contains my experiments with the `big_vision` repo to train ViTs on ImageNet-1k. ⭐22 · Updated 2 years ago
- ⭐72 · Updated last year
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… ⭐34 · Updated last year
- A Python library for highly configurable transformers - easing model architecture search and experimentation. ⭐49 · Updated 3 years ago
- Official repository with code and data accompanying the NAACL 2021 paper "Hurdles to Progress in Long-form Question Answering" (https://a… ⭐46 · Updated 2 years ago
- ⭐13 · Updated 6 years ago
- Ranking of fine-tuned HF models as base models. ⭐35 · Updated 3 weeks ago
- ⭐67 · Updated 2 years ago
- Implementation of Token Shift GPT - An autoregressive model that solely relies on shifting the sequence space for mixing ⭐50 · Updated 3 years ago
- Official code release for the paper Coder Reviewer Reranking for Code Generation. ⭐43 · Updated 2 years ago
- ⭐44 · Updated 6 months ago
- ⭐54 · Updated last year
- An implementation of Transformer with Expire-Span, a circuit for learning which memories to retain ⭐34 · Updated 4 years ago
- A minimal PyTorch Lightning OpenAI GPT with DeepSpeed training! ⭐111 · Updated 2 years ago
- A case study of efficient training of large language models using commodity hardware. ⭐69 · Updated 2 years ago
- A diff tool for language models ⭐42 · Updated last year
- Google's BigBird (Jax/Flax & PyTorch) @ 🤗 Transformers ⭐49 · Updated 2 years ago
- ⭐44 · Updated 3 years ago
- ⭐98 · Updated 2 years ago
- A package for fine-tuning pretrained NLP transformers using semi-supervised learning ⭐14 · Updated 3 years ago
- Implementation of N-Grammer, augmenting Transformers with latent n-grams, in PyTorch ⭐74 · Updated 2 years ago
- ⭐19 · Updated 2 years ago
- [ICML 2023] Exploring the Benefits of Training Expert Language Models over Instruction Tuning ⭐98 · Updated 2 years ago
- ⭐21 · Updated 3 years ago