rinnakk / prefix-tuning-gpt
Example code for prefix-tuning GPT/GPT-NeoX models and for inference with trained prefixes
☆13 · Updated 2 years ago
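Prefix-tuning keeps the base GPT model frozen and learns only a small set of virtual key/value vectors that are prepended to the keys and values of each attention layer. A minimal single-head sketch of that mechanism in NumPy (function name and shapes are illustrative assumptions, not taken from the repo):

```python
import numpy as np

def attention_with_prefix(q, k, v, prefix_k, prefix_v):
    """Single-head attention where trainable prefix key/value vectors are
    prepended to the frozen model's keys and values, so queries can attend
    to the learned prefix as if it were extra context tokens."""
    k_ext = np.concatenate([prefix_k, k], axis=0)   # (p + n, d)
    v_ext = np.concatenate([prefix_v, v], axis=0)   # (p + n, d)
    scores = q @ k_ext.T / np.sqrt(q.shape[-1])     # (m, p + n)
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over all keys
    return weights @ v_ext                          # (m, d)

# During prefix-tuning, only prefix_k / prefix_v would receive gradients;
# the base-model projections producing q, k, v stay frozen.
rng = np.random.default_rng(0)
d, n_ctx, n_prefix = 8, 6, 3
q = rng.normal(size=(4, d))
k = rng.normal(size=(n_ctx, d))
v = rng.normal(size=(n_ctx, d))
prefix_k = rng.normal(size=(n_prefix, d))
prefix_v = rng.normal(size=(n_prefix, d))
out = attention_with_prefix(q, k, v, prefix_k, prefix_v)
print(out.shape)  # (4, 8)
```

At inference time with a trained prefix, the stored `prefix_k`/`prefix_v` tensors are simply loaded and concatenated the same way, with no change to the base model's weights.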
Alternatives and similar repositories for prefix-tuning-gpt
Users interested in prefix-tuning-gpt are comparing it to the repositories listed below
- ☆43 · Updated 4 years ago
- A Benchmark for Robust, Multi-evidence, Multi-answer Question Answering ☆17 · Updated 2 years ago
- ☆46 · Updated 3 years ago
- Lightblue LLM Eval Framework: tengu, elyza100, ja-mtbench, rakuda ☆18 · Updated last month
- Checkpointable dataset utilities for foundation model training ☆31 · Updated last year
- ☆29 · Updated 3 years ago
- Project for LLM evaluation on Japanese tasks ☆90 · Updated last week
- ☆42 · Updated last year
- ☆11 · Updated 4 years ago
- Continual pre-training & instruction tuning support, forked from llama-recipes ☆33 · Updated last year
- ☆61 · Updated last year
- Convenient Text-to-Text Training for Transformers ☆19 · Updated 3 years ago
- ☆14 · Updated 3 years ago
- List of papers on Self-Correction of LLMs ☆80 · Updated 10 months ago
- Japanese LLaMa experiment ☆53 · Updated 2 weeks ago
- COMET-ATOMIC ja ☆30 · Updated last year
- Repo for the "Smart Word Suggestions" (SWS) task and benchmark ☆20 · Updated last year
- hllama, a library that aims to provide a set of utility tools for large language models ☆10 · Updated last year
- LEIA: Facilitating Cross-Lingual Knowledge Transfer in Language Models with Entity-based Data Augmentation ☆22 · Updated last year
- Code to pre-train Japanese T5 models ☆40 · Updated 4 years ago
- KETOD: Knowledge-Enriched Task-Oriented Dialogue ☆32 · Updated 2 years ago
- BLOOM+1: Adapting the BLOOM model to support a new unseen language ☆74 · Updated last year
- Implementation of an autoregressive language model using an improved Transformer and DeepSpeed pipeline parallelism ☆32 · Updated 3 years ago
- ☆50 · Updated last year
- GPT-jax based on the official Hugging Face library ☆13 · Updated 4 years ago
- ☆18 · Updated 10 months ago
- ☆14 · Updated last year
- Do Multilingual Language Models Think Better in English? ☆42 · Updated 2 years ago
- PyTorch implementation and pre-trained Japanese model for CANINE, the efficient character-level transformer ☆89 · Updated 2 years ago
- ☆23 · Updated 2 years ago