rinnakk / prefix-tuning-gpt
Example code for prefix-tuning GPT/GPT-NeoX models and for inference with trained prefixes
☆12 · Updated 2 years ago
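The repository ships its own training and inference scripts, which are not reproduced here. As a rough sketch of the technique itself, prefix tuning can be illustrated with the Hugging Face `peft` library: the base model stays frozen and only a small set of virtual key/value prefix vectors is trained. The base model name, `num_virtual_tokens`, and prompt below are illustrative assumptions, not the repo's defaults.

```python
# Minimal prefix-tuning sketch using Hugging Face peft (illustrative only;
# not the training code from rinnakk/prefix-tuning-gpt).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PrefixTuningConfig, TaskType, get_peft_model

model_name = "rinna/japanese-gpt-neox-small"  # assumed base model for the example
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Prefix tuning freezes the base model and learns `num_virtual_tokens`
# key/value vectors that are prepended to attention at every layer.
peft_config = PrefixTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    num_virtual_tokens=20,  # illustrative hyperparameter
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()  # only the prefix parameters are trainable

# Inference with a trained prefix: generate as usual; the PEFT wrapper
# injects the learned prefix key/values automatically.
inputs = tokenizer("Example prompt:", return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Training the prefix then proceeds as ordinary causal-LM fine-tuning (e.g. with `transformers.Trainer`), since gradients flow only into the prefix parameters.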
Alternatives and similar repositories for prefix-tuning-gpt
Users interested in prefix-tuning-gpt are comparing it to the libraries listed below.
- ☆46 · Updated 3 years ago
- Checkpointable dataset utilities for foundation model training · ☆32 · Updated last year
- Supports continual pre-training and instruction tuning; forked from llama-recipes · ☆32 · Updated last year
- A Benchmark for Robust, Multi-evidence, Multi-answer Question Answering · ☆16 · Updated 2 years ago
- ☆11 · Updated 3 years ago
- Observe the slow deterioration of my mental sanity in the GitHub commit history · ☆12 · Updated 2 years ago
- Code to pre-train Japanese T5 models · ☆41 · Updated 3 years ago
- ☆29 · Updated 3 years ago
- Japanese LLaMa experiment · ☆52 · Updated 6 months ago
- ☆10 · Updated 2 years ago
- ☆27 · Updated last month
- LEIA: Facilitating Cross-Lingual Knowledge Transfer in Language Models with Entity-based Data Augmentation · ☆21 · Updated last year
- DIRECT: Direct and Indirect REsponses in Conversational Text Corpus · ☆16 · Updated 3 years ago
- Utility scripts for preprocessing Wikipedia texts for NLP · ☆77 · Updated last year
- KETOD: Knowledge-Enriched Task-Oriented Dialogue · ☆32 · Updated 2 years ago
- COMET-ATOMIC ja · ☆29 · Updated last year
- ☆14 · Updated 3 years ago
- ☆10 · Updated last year
- ☆49 · Updated last year
- Repository of the ACL 2023 paper "Unbalanced Optimal Transport for Unbalanced Word Alignment" · ☆38 · Updated last year
- ☆15 · Updated 3 years ago
- A package for fine-tuning pretrained NLP transformers using semi-supervised learning · ☆14 · Updated 3 years ago
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… · ☆34 · Updated last year
- ☆41 · Updated last year
- Code associated with the paper "Data Augmentation using Pre-trained Transformer Models" · ☆52 · Updated last year
- ☆16 · Updated 6 months ago
- ☆43 · Updated 3 years ago
- Code & data for "Comparative Opinion Summarization via Collaborative Decoding" (Iso et al., Findings of ACL 2022) · ☆21 · Updated 3 months ago
- ☆19 · Updated last year
- ☆44 · Updated 6 months ago