allenai / tango
Organize your experiments into discrete steps that can be cached and reused throughout the lifetime of your research project.
★ 525 · Updated 3 months ago
Related projects:
- Task-based datasets, preprocessing, and evaluation for sequence models. ★ 552 · Updated this week
- 🤗 A PyTorch library of curated Transformer models and their composable components ★ 861 · Updated 5 months ago
- Flexible components pairing 🤗 Transformers with PyTorch Lightning ★ 610 · Updated last year
- Mistral: A strong, northwesterly wind: Framework for transparent and accessible large-scale language model training, built with Hugging F… ★ 555 · Updated 10 months ago
- An open collection of implementation tips, tricks and resources for training large language models ★ 455 · Updated last year
- Reproduce results and replicate training for T0 (Multitask Prompted Training Enables Zero-Shot Task Generalization) ★ 456 · Updated last year
- Interpretability for sequence generation models ★ 361 · Updated 3 weeks ago
- Cramming the training of a (BERT-type) language model into limited compute. ★ 1,284 · Updated 3 months ago
- Original implementation of Prompt Tuning from Lester et al., 2021 ★ 641 · Updated 3 months ago
- All-in-one text de-duplication ★ 585 · Updated 3 months ago
- Interpretable Evaluation for AI Systems ★ 359 · Updated last year
- Code repository for supporting the paper "Atlas: Few-shot Learning with Retrieval Augmented Language Models" (https://arxiv.org/abs/2208.03… ★ 508 · Updated 9 months ago
- Repository containing code for the "How to Train BERT with an Academic Budget" paper ★ 309 · Updated last year
- Powerful unsupervised domain adaptation method for dense retrieval. Requires only an unlabeled corpus and yields massive improvements: "GPL: … ★ 321 · Updated last year
- Implementation of RETRO, DeepMind's retrieval-based attention net, in PyTorch ★ 850 · Updated 10 months ago
- Central place for the engineering/scaling WG: documentation, SLURM scripts and logs, compute environment and data. ★ 972 · Updated last month
- Maximal update parametrization (µP) ★ 1,334 · Updated 2 months ago
- Code for T-Few from "Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning" ★ 424 · Updated last year
- Code for the ALiBi method for transformer language models (ICLR 2022) ★ 497 · Updated 10 months ago
- String-to-String Algorithms for Natural Language Processing ★ 527 · Updated last month
- Contriever: Unsupervised Dense Information Retrieval with Contrastive Learning ★ 658 · Updated last year
- Seminar on Large Language Models (COMP790-101 at UNC Chapel Hill, Fall 2022) ★ 309 · Updated last year
- Long Range Arena for Benchmarking Efficient Transformers ★ 711 · Updated 9 months ago
- Run Effective Large-Batch Contrastive Learning Beyond GPU/TPU Memory Constraints ★ 342 · Updated 5 months ago
- What would you do with 1000 H100s... ★ 816 · Updated 8 months ago
- Fast & simple repository for pre-training and fine-tuning T5-style models ★ 957 · Updated 3 weeks ago
- NL-Augmenter: A Collaborative Repository of Natural Language Transformations ★ 770 · Updated 4 months ago
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day ★ 248 · Updated 10 months ago