oshizo / gpt_index_japanese_trial
☆19 · Updated 2 years ago
Alternatives and similar repositories for gpt_index_japanese_trial
Users interested in gpt_index_japanese_trial are comparing it to the libraries listed below.
- Japanese Realistic Textual Entailment Corpus (NLP 2020, LREC 2020) ☆76 · Updated 2 years ago
- Dialogue corpus created by crawling おーぷん2ちゃんねる (Open 2channel) ☆98 · Updated 4 years ago
- Japanese T5 model ☆115 · Updated 11 months ago
- ☆86 · Updated 2 years ago
- Japanese translation of the alpaca dataset ☆89 · Updated 2 years ago
- ☆50 · Updated last year
- Implementations of data augmentation techniques for Japanese NLP ☆64 · Updated 2 years ago
- ☆141 · Updated 2 years ago
- 📝 A list of pre-trained BERT models for Japanese with word/subword tokenization + vocabulary construction algorithm information ☆131 · Updated 2 years ago
- An integrated Japanese analyzer based on foundation models ☆134 · Updated 2 weeks ago
- Japanese synonym library ☆53 · Updated 3 years ago
- ☆161 · Updated 4 years ago
- MeCab + NEologd + Docker + Python3 ☆36 · Updated 3 years ago
- Code for evaluating Japanese pretrained models provided by NTT Ltd. ☆242 · Updated 2 years ago
- ボケて電笑戦 (bokete DENSHOSEN) Workshop ☆42 · Updated 3 years ago
- Japanese named entity recognition dataset built from Wikipedia ☆141 · Updated last year
- General-purpose Switch-Transformer-based Japanese language model ☆118 · Updated last year
- Japanese Movie Recommendation Dialogue dataset ☆28 · Updated 3 years ago
- Evaluation scripts for JMTEB (Japanese Massive Text Embedding Benchmark) ☆72 · Updated last month
- Japanese tokenizer for Transformers ☆79 · Updated last year
- Accommodation Search Dialog Corpus (宿泊施設探索対話コーパス) ☆25 · Updated last year
- Japanese instruction data ☆24 · Updated 2 years ago
- Exploring Japanese SimCSE ☆69 · Updated last year
- OCR training dataset created as part of the digitized-materials OCR text conversion project ☆74 · Updated last year
- Text classification using LLMs and LoRA ☆97 · Updated 2 years ago
- JMultiWOZ: A Large-Scale Japanese Multi-Domain Task-Oriented Dialogue Dataset (LREC-COLING 2024) ☆25 · Updated last year
- Japanese-BPEEncoder ☆41 · Updated 3 years ago
- Evaluation dataset for the honorific-conversion (keigo) task ☆21 · Updated 2 years ago
- A comparison tool for Japanese tokenizers ☆120 · Updated last year
- DistilBERT model pre-trained on 131 GB of Japanese web text; the teacher model is a BERT-base built in-house at LINE ☆45 · Updated 2 years ago