AUGMXNT / shisa
☆41 Updated last year
Alternatives and similar repositories for shisa
Users interested in shisa are comparing it to the repositories listed below.
- ☆42 Updated last year
- Japanese LLaMa experiment ☆53 Updated 6 months ago
- ☆16 Updated 9 months ago
- LEIA: Facilitating Cross-Lingual Knowledge Transfer in Language Models with Entity-based Data Augmentation ☆21 Updated last year
- Supports continual pre-training & instruction tuning; forked from llama-recipes ☆32 Updated last year
- ☆60 Updated last year
- A robust text-processing pipeline framework for customizable, efficient, and metric-logged text preprocessing ☆122 Updated this week
- ☆10 Updated last year
- Mamba training library developed by Kotoba Technologies ☆71 Updated last year
- Project for evaluating LLMs on Japanese tasks ☆83 Updated this week
- ☆23 Updated last year
- Japanese chat dataset for building LLMs ☆83 Updated last year
- ☆14 Updated last year
- Fine-tuning Moshi/J-Moshi on your own spoken dialogue data ☆57 Updated 2 months ago
- Unofficial entropix implementation for Gemma2, Llama, Qwen2, and Mistral ☆17 Updated 5 months ago
- Ongoing research project on continual pre-training of LLMs (dense model) ☆42 Updated 3 months ago
- Utility scripts for preprocessing Wikipedia texts for NLP ☆77 Updated last year
- ☆48 Updated 2 years ago
- YAST - Yet Another SPLADE or Sparse Trainer ☆18 Updated last week
- Swallow project: evaluation scripts for large language models ☆17 Updated 2 months ago
- ☆15 Updated last year
- Mixtral-based Ja-En (En-Ja) translation model ☆19 Updated 5 months ago
- ☆84 Updated last year
- ☆33 Updated 10 months ago
- COMET-ATOMIC ja ☆30 Updated last year
- ☆39 Updated last year
- Flexible evaluation tool for language models ☆46 Updated this week
- Japanese instruction data ☆24 Updated last year
- Japanese Massive Multitask Language Understanding Benchmark ☆36 Updated 6 months ago
- ☆135 Updated last week