AUGMXNT / shisa
☆41 · Updated 10 months ago
Alternatives and similar repositories for shisa:
Users interested in shisa are comparing it to the libraries listed below.
- ☆41 · Updated last year
- Japanese LLaMa experiment ☆52 · Updated 2 months ago
- LEIA: Facilitating Cross-Lingual Knowledge Transfer in Language Models with Entity-based Data Augmentation ☆21 · Updated 9 months ago
- A robust text processing pipeline framework enabling customizable, efficient, and metric-logged text preprocessing ☆120 · Updated 3 months ago
- ☆14 · Updated 5 months ago
- ☆22 · Updated last year
- ☆58 · Updated 8 months ago
- LLM evaluation project for Japanese tasks ☆80 · Updated this week
- Japanese chat dataset for building LLMs ☆80 · Updated last year
- ☆10 · Updated 9 months ago
- Supports continual pre-training & instruction tuning; forked from llama-recipes ☆31 · Updated last year
- Mamba training library developed by Kotoba Technologies ☆67 · Updated last year
- ☆15 · Updated last year
- Unofficial entropix implementation for Gemma2, Llama, Qwen2, and Mistral ☆17 · Updated last month
- Mixtral-based Ja-En (En-Ja) translation model ☆18 · Updated last month
- ☆14 · Updated 10 months ago
- Japanese Massive Multitask Language Understanding Benchmark ☆32 · Updated 2 months ago
- RealPersonaChat: A Realistic Persona Chat Corpus with Interlocutors' Own Personalities ☆51 · Updated 11 months ago
- COMET-ATOMIC ja ☆29 · Updated 11 months ago
- Swallow project: evaluation scripts for large language models ☆14 · Updated 7 months ago
- Utility scripts for preprocessing Wikipedia texts for NLP ☆76 · Updated 10 months ago
- ☆83 · Updated last year
- Japanese instruction data ☆22 · Updated last year
- ☆33 · Updated 6 months ago
- ☆25 · Updated 3 months ago
- ☆37 · Updated 6 months ago
- YAST - Yet Another SPLADE or Sparse Trainer ☆16 · Updated this week
- Code for removing benchmark data from your training data to help combat data snooping ☆25 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs ☆77 · Updated 10 months ago
- Checkpointable dataset utilities for foundation model training ☆32 · Updated last year