sonoisa / sentence-transformers
Sentence Embeddings with BERT & XLNet
☆32 · Updated last year
Related projects
Alternatives and complementary repositories for sentence-transformers
- A Japanese named entity recognition dataset created from Wikipedia ☆132 · Updated last year
- Implementations of data augmentation techniques for Japanese NLP ☆64 · Updated last year
- A rule-based analyzer that extracts and normalizes temporal expressions written in natural language ☆134 · Updated 9 months ago
- hottoSNS-BERT: a sentence embedding model trained on a large-scale SNS corpus ☆61 · Updated last month
- Japanese tokenizer for Transformers ☆78 · Updated 11 months ago
- Japanese Realistic Textual Entailment Corpus (NLP 2020, LREC 2020) ☆76 · Updated last year
- Japanese synonym library ☆52 · Updated 2 years ago
- ☆36 · Updated 3 years ago
- A dialogue corpus built by crawling おーぷん2ちゃんねる (Open 2channel) ☆94 · Updated 3 years ago
- Japanese T5 model ☆113 · Updated 2 months ago
- ☆30 · Updated 6 years ago
- 📝 A list of pre-trained BERT models for Japanese, with information on word/subword tokenization and vocabulary construction algorithms ☆129 · Updated last year
- ☆94 · Updated last year
- Japanese sentence segmentation library for Python ☆68 · Updated last year
- Exploring Japanese SimCSE ☆62 · Updated last year
- ☆146 · Updated last month
- chakki's Aspect-Based Sentiment Analysis dataset ☆138 · Updated 2 years ago
- ☆16 · Updated 3 years ago
- Sample code for natural language processing in Japanese ☆63 · Updated last year
- Distributed representations of words and named entities trained on Wikipedia ☆181 · Updated 3 years ago
- ☆39 · Updated last year
- Japanese text8 corpus for word embeddings ☆110 · Updated 7 years ago
- Repository for JSICK ☆44 · Updated last year
- A comparison tool for Japanese tokenizers ☆119 · Updated 5 months ago
- hottoSNS-w2v: a word embedding model trained on a large-scale Japanese SNS + web corpus ☆60 · Updated 3 years ago
- Japanese-BPEEncoder ☆39 · Updated 3 years ago
- 🛥 Vaporetto is a fast and lightweight pointwise-prediction-based tokenizer; this is a Python wrapper for Vaporetto ☆21 · Updated 2 months ago
- Tutorial for deep learning dialogue models ☆75 · Updated 2 years ago
- ☆40 · Updated 3 years ago