EleutherAI / best-download
URL downloader supporting checkpointing and continuous checksumming.
⭐19, updated last year
Alternatives and similar repositories for best-download
Users who are interested in best-download are comparing it to the libraries listed below.
- GPT-jax, based on the official Hugging Face library ⭐13, updated 4 years ago
- **ARCHIVED** Filesystem interface to 🤗 Hub ⭐58, updated 2 years ago
- One-stop shop for all things carp ⭐59, updated 3 years ago
- Developing tools to automatically analyze datasets ⭐75, updated last year
- Few-shot learning using EleutherAI's GPT-Neo, an open-source version of GPT-3 ⭐18, updated 4 years ago
- ⭐92, updated 3 years ago
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP ⭐58, updated 3 years ago
- A library for squeakily cleaning and filtering language datasets ⭐47, updated 2 years ago
- Efficiently computing & storing token n-grams from large corpora ⭐26, updated last year
- ⭐78, updated last year
- Experiments with generating open-source language model assistants ⭐97, updated 2 years ago
- ⭐33, updated 2 years ago
- Convenient Text-to-Text Training for Transformers ⭐19, updated 3 years ago
- 🤗 Disaggregators: curated data labelers for in-depth analysis ⭐67, updated 2 years ago
- Anh: LAION's multilingual assistant datasets and models ⭐27, updated 2 years ago
- My explorations into editing the knowledge and memories of an attention network ⭐34, updated 2 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset ⭐96, updated 2 years ago
- A client library for LAION's effort to filter CommonCrawl with CLIP, building a large-scale image-text dataset ⭐31, updated 2 years ago
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… ⭐35, updated 2 years ago
- Scripts to convert datasets from various sources to Hugging Face Datasets ⭐57, updated 3 years ago
- Python Research Framework ⭐106, updated 3 years ago
- A new metric that can be used to evaluate the faithfulness of text generated by LLMs. The work behind this repository can be found he… ⭐31, updated 2 years ago
- ⭐31, updated this week
- ⭐22, updated 9 months ago
- Simple Python client for the Hugging Face Inference API ⭐75, updated 5 years ago
- Training and evaluation code for the paper "Headless Language Models: Learning without Predicting with Contrastive Weight Tying" (https:/… ⭐27, updated last year
- ⭐44, updated 2 years ago
- ⭐87, updated 3 years ago
- Hugging Face and Pyserini interoperability ⭐19, updated 2 years ago
- See https://github.com/cuda-mode/triton-index/ instead! ⭐10, updated last year