EleutherAI / best-download
URL downloader supporting checkpointing and continuous checksumming.
☆19 · Updated last year
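To illustrate the two features the description names, here is a minimal sketch of a resumable download with a continuously updated checksum. This is an illustration, not best-download's actual implementation: the function names (`hash_existing`, `resume_download`), the SHA-256 choice, and the 64 KiB chunk size are all assumptions. Resumption relies on a standard HTTP `Range` request, and the hash is fed as bytes arrive so no second pass over the file is needed.

```python
import hashlib
import os
import urllib.request

CHUNK = 1 << 16  # 64 KiB read size (an arbitrary choice for this sketch)


def hash_existing(path: str):
    """Re-hash any partially downloaded bytes so the checksum stays
    continuous across restarts. Returns a hashlib object, not a digest,
    so the caller can keep feeding it new chunks."""
    h = hashlib.sha256()
    if os.path.exists(path):
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(CHUNK), b""):
                h.update(chunk)
    return h


def resume_download(url: str, dest: str) -> str:
    """Download `url` to `dest`, resuming from a partial file via an HTTP
    Range request and checksumming as bytes arrive. Returns the final
    SHA-256 hex digest of the complete file."""
    h = hash_existing(dest)  # catch the checksum up to the checkpoint
    start = os.path.getsize(dest) if os.path.exists(dest) else 0
    req = urllib.request.Request(url, headers={"Range": f"bytes={start}-"})
    with urllib.request.urlopen(req) as resp, open(dest, "ab") as out:
        for chunk in iter(lambda: resp.read(CHUNK), b""):
            out.write(chunk)  # the partial file on disk is the checkpoint
            h.update(chunk)   # continuous checksum: no re-read at the end
    return h.hexdigest()
```

If the process is killed mid-transfer, rerunning `resume_download` re-hashes the partial file and asks the server for only the remaining byte range, so no completed work is repeated.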
Alternatives and similar repositories for best-download
Users interested in best-download are comparing it to the libraries listed below.
- **ARCHIVED** Filesystem interface to 🤗 Hub ☆58 · Updated 2 years ago
- GPT-jax based on the official Hugging Face library ☆13 · Updated 4 years ago
- Few-shot learning using EleutherAI's GPT-Neo, an open-source version of GPT-3 ☆18 · Updated 4 years ago
- One-stop shop for all things carp ☆59 · Updated 3 years ago
- Developing tools to automatically analyze datasets ☆75 · Updated 10 months ago
- Scripts to convert datasets from various sources to Hugging Face Datasets ☆57 · Updated 2 years ago
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP ☆58 · Updated 3 years ago
- Experiments with generating open-source language model assistants ☆97 · Updated 2 years ago
- ☆90 · Updated 3 years ago
- A client library for LAION's effort to filter CommonCrawl with CLIP, building a large-scale image-text dataset ☆32 · Updated 2 years ago
- 🤗 Disaggregators: curated data labelers for in-depth analysis ☆66 · Updated 2 years ago
- ☆33 · Updated 2 years ago
- ☆79 · Updated last year
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset ☆94 · Updated 2 years ago
- Our open-source implementation of MiniLMv2 (https://aclanthology.org/2021.findings-acl.188) ☆61 · Updated 2 years ago
- Ongoing research training transformer language models at scale, including BERT & GPT-2 ☆18 · Updated 2 years ago
- Helper scripts and notes that were used while porting various NLP models ☆47 · Updated 3 years ago
- Efficiently computing & storing token n-grams from large corpora ☆26 · Updated 11 months ago
- A library for squeakily cleaning and filtering language datasets ☆47 · Updated 2 years ago
- Hugging Face and Pyserini interoperability ☆19 · Updated 2 years ago
- This project shows how to derive the total number of training tokens from a large text dataset from 🤗 Datasets with Apache Beam and Data… ☆27 · Updated 2 years ago
- Babysit your preemptible TPUs ☆86 · Updated 2 years ago
- See https://github.com/cuda-mode/triton-index/ instead! ☆11 · Updated last year
- Python tools for processing the Stack Exchange data dumps into a text dataset for language models ☆84 · Updated last year
- A diff tool for language models ☆44 · Updated last year
- ☆87 · Updated 3 years ago
- ☆40 · Updated 2 years ago
- Training and evaluation code for the paper "Headless Language Models: Learning without Predicting with Contrastive Weight Tying" (https:/… ☆27 · Updated last year
- ☆30 · Updated 3 years ago
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… ☆34 · Updated 2 years ago