EleutherAI / best-download
URL downloader supporting checkpointing and continuous checksumming.
⭐ 19 · Updated last year
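The idea behind checkpointing with continuous checksumming can be sketched as follows. This is a minimal illustration, not best-download's actual API: the function name `resumable_download`, the chunk size, and the choice of SHA-256 are all assumptions. It resumes a partial download via an HTTP `Range` request and keeps a running hash over every byte written, re-hashing any previously downloaded portion first so the digest always covers the whole file.

```python
import hashlib
import os
import urllib.request


def resumable_download(url, dest, chunk_size=1 << 16):
    """Download `url` to `dest`, resuming from a partial file if one exists.

    Returns the SHA-256 hex digest of the complete file. The checksum is
    updated continuously as chunks arrive, so no second pass over the file
    is needed once the download finishes. (Illustrative sketch only.)
    """
    sha = hashlib.sha256()
    offset = 0
    if os.path.exists(dest):
        # Checkpoint found: re-hash the bytes we already have so the
        # running checksum stays in sync with the file on disk.
        with open(dest, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                sha.update(chunk)
                offset += len(chunk)
    req = urllib.request.Request(url)
    if offset:
        # Ask the server to resume from where the partial file ends.
        req.add_header("Range", f"bytes={offset}-")
    with urllib.request.urlopen(req) as resp, open(dest, "ab") as out:
        for chunk in iter(lambda: resp.read(chunk_size), b""):
            out.write(chunk)
            sha.update(chunk)  # hash as we go: continuous checksumming
    return sha.hexdigest()
```

A real implementation would also need to handle servers that ignore `Range` requests and to persist the expected checksum alongside the checkpoint; this sketch omits both.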
Alternatives and similar repositories for best-download
Users interested in best-download are comparing it to the libraries listed below.
- **ARCHIVED** Filesystem interface to the 🤗 Hub (⭐ 58 · Updated 2 years ago)
- One-stop shop for all things carp (⭐ 59 · Updated 2 years ago)
- GPT-jax, based on the official Hugging Face library (⭐ 13 · Updated 4 years ago)
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP (⭐ 58 · Updated 2 years ago)
- A library for squeakily cleaning and filtering language datasets (⭐ 47 · Updated last year)
- My explorations into editing the knowledge and memories of an attention network (⭐ 35 · Updated 2 years ago)
- Few-shot learning using EleutherAI's GPT-Neo, an open-source version of GPT-3 (⭐ 18 · Updated 3 years ago)
- (⭐ 90 · Updated 2 years ago)
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… (⭐ 34 · Updated last year)
- (⭐ 78 · Updated last year)
- A client library for LAION's effort to filter CommonCrawl with CLIP, building a large-scale image-text dataset (⭐ 31 · Updated 2 years ago)
- Code repo for "Model-Generated Pretraining Signals Improves Zero-Shot Generalization of Text-to-Text Transformers" (ACL 2023) (⭐ 22 · Updated last year)
- Convenient Text-to-Text Training for Transformers (⭐ 19 · Updated 3 years ago)
- See https://github.com/cuda-mode/triton-index/ instead! (⭐ 11 · Updated last year)
- An attempt to merge ESBN with Transformers, to endow Transformers with the ability to emergently bind symbols (⭐ 16 · Updated 3 years ago)
- Implementation of Token Shift GPT, an autoregressive model that relies solely on shifting the sequence space for mixing (⭐ 50 · Updated 3 years ago)
- (⭐ 32 · Updated 2 years ago)
- Code for cleaning benchmark data out of your training data to help combat data snooping (⭐ 25 · Updated 2 years ago)
- (⭐ 19 · Updated 2 years ago)
- A new metric for evaluating the faithfulness of text generated by LLMs. The work behind this repository can be found he… (⭐ 31 · Updated last year)
- Experiments with generating open-source language model assistants (⭐ 97 · Updated 2 years ago)
- (⭐ 44 · Updated 7 months ago)
- Using short models to classify long texts (⭐ 21 · Updated 2 years ago)
- Embedding Recycling for Language Models (⭐ 38 · Updated last year)
- Implementation of a stop sequencer for Hugging Face Transformers (⭐ 16 · Updated 2 years ago)
- For experiments involving InstructGPT. Currently used for documenting open research questions (⭐ 71 · Updated 2 years ago)
- Helper scripts and notes used while porting various NLP models (⭐ 46 · Updated 3 years ago)
- [AAAI 2024] Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following (⭐ 79 · Updated 9 months ago)
- (⭐ 20 · Updated 4 years ago)
- Demonstration that fine-tuning a RoPE model on sequences longer than those seen in pre-training extends the model's context limit (⭐ 63 · Updated 2 years ago)